Open Forum #68 Countering the use of ICT for terrorist purposes

Session at a Glance

Summary

This discussion focused on countering the use of information and communication technologies (ICT) for terrorist purposes. Representatives from various organizations, including the UN Counter-Terrorism Committee Executive Directorate (CTED), the Parliamentary Assembly of the Mediterranean (PAM), the UN Office on Drugs and Crime (UNODC), Tech Against Terrorism, and the Global Internet Forum to Counter Terrorism (GIFCT), shared their perspectives and initiatives.

The speakers highlighted the evolving nature of terrorist threats in the digital space, including the exploitation of social media, video games, and emerging technologies like artificial intelligence. They emphasized the need for a multi-stakeholder approach involving governments, tech companies, civil society, and academia to address these challenges effectively.

Key initiatives discussed included CTED’s work on developing guiding principles for member states, PAM’s efforts to promote dialogue and legislation on AI regulation, UNODC’s Global Initiative on Handling Electronic Evidence, Tech Against Terrorism’s focus on disrupting terrorist use of the internet, and GIFCT’s cross-platform solutions for tech companies.

The speakers stressed the importance of balancing counter-terrorism efforts with respect for human rights and fundamental freedoms. They also highlighted the need for improved international cooperation, capacity building for law enforcement and judicial systems, and the development of legal frameworks to address crimes committed through or by AI.

The discussion underscored the critical role of public-private partnerships in countering terrorist use of the internet. Speakers emphasized the need for continued collaboration, knowledge sharing, and adaptation to emerging threats in the rapidly evolving digital landscape.

Key points

Major discussion points:

– The increasing use of the internet and emerging technologies by terrorist groups for recruitment, radicalization, and strategic communications

– Challenges faced by governments and tech companies in countering terrorist use of the internet, including legal/jurisdictional issues and capacity gaps

– The importance of public-private partnerships and multi-stakeholder collaboration in addressing these challenges

– The need for improved detection and analysis capabilities, including potential benefits of AI for content moderation

– Concerns about terrorist-operated websites and infrastructure

Overall purpose:

The goal of this discussion was to examine current trends, challenges and collaborative efforts to counter terrorist use of the internet and emerging technologies from the perspectives of various stakeholders including the UN, governments, tech companies and NGOs.

Tone:

The overall tone was serious and focused, reflecting the gravity of the topic. Speakers maintained a professional, analytical approach while emphasizing the urgency of addressing these issues. There was also an underlying tone of cautious optimism about the potential for improved collaboration and technological solutions to make progress in this area.

Speakers

– Jennifer Bramlette: Counterterrorism Committee Executive Directorate (CTED), speaking on behalf of the Executive Director

– Pedro Roque: Vice President of the Parliamentary Assembly of the Mediterranean (PAM)

– Arianna Lepore: Terrorism Prevention Branch of the United Nations Office on Drugs and Crime (UNODC)

– Adam Hadley: Executive Director and Founder of Tech Against Terrorism

– Dr. Erin Saltman: Membership and Programme Director of the Global Internet Forum to Counter Terrorism (GIFCT)

Additional speakers:

– Natalia Gherman: Executive Director of the Counterterrorism Committee Executive Directorate (mentioned but did not speak)

Full session report

Countering Terrorist Use of Information and Communication Technologies: A Multi-Stakeholder Approach

This discussion brought together representatives from various organisations to address the critical issue of countering terrorist use of information and communication technologies (ICT). The speakers, representing the UN Counter-Terrorism Committee Executive Directorate (CTED), the Parliamentary Assembly of the Mediterranean (PAM), the UN Office on Drugs and Crime (UNODC), Tech Against Terrorism, and the Global Internet Forum to Counter Terrorism (GIFCT), shared their perspectives on current challenges, initiatives, and collaborative efforts in this domain.

Evolving Threat Landscape

The speakers unanimously highlighted the evolving nature of terrorist threats in the digital space. Adam Hadley from Tech Against Terrorism emphasised a paradigm shift in how we view terrorist use of the internet, framing it as a strategic rather than merely tactical tool. This perspective broadens the scope of the discussion, encompassing not only recruitment and radicalisation but also strategic communications and infrastructure concerns.

Jennifer Bramlette from CTED noted the increasing exploitation of social media, video games, and emerging technologies like artificial intelligence by terrorist groups. The speakers agreed that terrorists are becoming increasingly entrepreneurial and imaginative in their use of technologies, adapting their techniques to evade detection and removal from major platforms.

Challenges in Countering Terrorist Use of ICT

Several key challenges were identified during the discussion:

1. Varying technological capabilities: Jennifer Bramlette highlighted the stark contrast between member states with advanced technological capabilities and those struggling with basic infrastructure, such as providing electricity to police stations. Many states face challenges in incorporating ICT into their counterterrorism systems effectively.

2. Legal and regulatory gaps: Both Jennifer Bramlette and Pedro Roque emphasised the urgent need for updated counter-terrorism laws and regulatory frameworks. Bramlette pointed out that most states lack laws to deal with crimes committed through or by artificial intelligence, raising questions about how to “arrest a chatbot” or “prosecute an AI”.

3. Jurisdictional complexities: The speakers noted the challenges posed by the borderless nature of cyberspace, emphasising the need for cross-border consensus building and clearer international frameworks.

4. Content moderation complexities: Dr. Erin Saltman from GIFCT illustrated the difficulties in content moderation, using the example of distinguishing between a foreign terrorist fighter and “literally just a man in the back of a Toyota”.

5. Balancing security and human rights: The speakers stressed the importance of respecting human rights and fundamental freedoms while implementing counter-terrorism measures online.

Collaborative Initiatives and Approaches

The discussion underscored the critical importance of multi-stakeholder collaboration in addressing these challenges:

1. CTED’s inclusive approach: Jennifer Bramlette described CTED’s efforts to bring together member states, international organisations, the private sector, civil society, and academia. CTED is working on developing non-binding guiding principles for member states on countering terrorist use of ICT and maintains a Global Research Network to foster knowledge exchange.

2. PAM’s legislative efforts: Pedro Roque highlighted PAM’s commitment to fostering dialogue and cooperation towards the regulation of AI and emerging technologies. PAM has created the Permanent Global Parliamentary Observatory on AI and ICT and publishes daily and weekly digests on AI and emerging technologies to keep parliamentarians informed.

3. UNODC’s capacity-building initiatives: Arianna Lepore discussed UNODC’s Global Initiative on Handling Electronic Evidence, which supports criminal justice practitioners. UNODC plans to expand its Practical Guide on Handling Electronic Evidence to include FinTech providers and is developing customised guides for specific countries. Additionally, UNODC has updated its model legislation on mutual legal assistance to include provisions on handling electronic evidence.

4. Tech Against Terrorism’s technological solutions: Adam Hadley described the organisation’s Terrorist Content Analytics Platform (TCAP) for identifying and verifying terrorist content online. The organisation maintains a 24/7 capability to respond to major terrorist attacks and focuses on addressing terrorist-operated websites, including challenges related to domain names and hosting. It also operates a threat intelligence team and provides hashing and other technical support services to platforms, including a trusted flagger portal.

5. GIFCT’s cross-platform solutions: Dr. Erin Saltman outlined GIFCT’s efforts to provide tech companies with tools and frameworks for countering terrorist content online. GIFCT maintains a hash-sharing database and an incident response framework (a minimal illustrative sketch of the hash-sharing idea follows this list). The organisation has specific membership criteria and working groups, and hosts regional workshops for knowledge exchange on local extremist trends. GIFCT also supports academic research through its Global Network on Extremism and Technology and a micro-grants programme.
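To make the hash-sharing idea concrete, below is a minimal illustrative sketch, not GIFCT's actual implementation: the hash function, label taxonomy, and example data are assumptions for illustration only. Real deployments typically rely on perceptual hashes, so that visually similar copies still match, and they exchange only hashes, never the underlying content.

```python
# Minimal illustrative sketch of cross-platform hash sharing (NOT GIFCT's actual
# system). SHA-256, the label taxonomy, and the example bytes are assumptions;
# production systems typically use perceptual hashing so near-duplicates still match.

import hashlib

# The shared database holds only fingerprints plus an agreed taxonomy label,
# never the content itself.
SHARED_HASH_DB: dict[str, str] = {}

def fingerprint(data: bytes) -> str:
    """Hash an uploaded file (image, video, PDF) into a shareable fingerprint."""
    return hashlib.sha256(data).hexdigest()

def contribute(data: bytes, label: str) -> None:
    """A member platform contributes the hash of confirmed violating content."""
    SHARED_HASH_DB[fingerprint(data)] = label

def check_upload(data: bytes) -> str | None:
    """Another platform checks a new upload against the shared fingerprints."""
    return SHARED_HASH_DB.get(fingerprint(data))

if __name__ == "__main__":
    manifesto_pdf = b"%PDF-1.4 example attacker manifesto bytes"
    contribute(manifesto_pdf, "attacker_manifesto")   # platform A shares the hash
    match = check_upload(manifesto_pdf)               # platform B sees the same file
    if match:
        print(f"Matched shared hash ({match}): route to human review")
```

The design point this sketch tries to capture, consistent with the discussion, is that hashes rather than content are exchanged, and that a match is a signal for review under each platform's own policies rather than an automatic removal.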

Emerging Technologies: Risks and Opportunities

The speakers discussed the dual nature of emerging technologies, particularly artificial intelligence:

1. Potential risks: Jennifer Bramlette noted that AI could exacerbate online harms and real-world damages.

2. Opportunities for counter-terrorism: Adam Hadley expressed hope that generative AI could improve the accuracy and volume of content moderation decisions (see the illustrative sketch after this list).

3. Challenges in incident response: Dr. Erin Saltman raised concerns about the potential for AI-generated fake incident response content, emphasising the need for improved verification processes.
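As a purely illustrative sketch of the opportunity described in point 2 above, and not any speaker's actual system, the snippet below shows how a generative model might be used to triage flagged content, with ambiguous cases routed to human reviewers. The `query_llm` stub, the labels, and the confidence threshold are all assumptions made for the example.

```python
# Illustrative sketch only: generative AI assisting, not replacing, human moderation.
# The labels, threshold, and query_llm stub are assumptions, not a real platform API.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # e.g. "terrorist_content", "news_reporting", "unclear"
    confidence: float   # model-reported confidence in [0.0, 1.0]

def query_llm(text: str) -> Decision:
    """Stand-in for a call to a generative model prompted with policy definitions.
    A toy heuristic is used here so the sketch runs end to end."""
    if "attack manifesto" in text.lower():
        return Decision(label="terrorist_content", confidence=0.95)
    return Decision(label="unclear", confidence=0.40)

def triage(text: str, threshold: float = 0.9) -> str:
    """Auto-queue only high-confidence matches; everything ambiguous goes to human
    reviewers who have the local and linguistic context the model may lack."""
    decision = query_llm(text)
    if decision.label == "terrorist_content" and decision.confidence >= threshold:
        return "queue_for_removal_and_hashing"
    return "route_to_human_review"

if __name__ == "__main__":
    print(triage("Translated excerpt of an attack manifesto"))   # high confidence
    print(triage("A man in the back of a Toyota"))               # ambiguous case
```

The human-in-the-loop routing mirrors the speakers' point: a model can raise the volume and first-pass accuracy of decisions, but ambiguous material, such as the "man in the back of a Toyota" example, still needs human and contextual review.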

UNODC’s Work on the UN Convention Against Cybercrime

Arianna Lepore highlighted UNODC’s involvement in developing the new UN Convention Against Cybercrime, which aims to address the global challenges posed by cybercrime and provide a framework for international cooperation in this area.

Unresolved Issues and Future Directions

Several key issues remain unresolved and require further attention:

1. Regulation of terrorist-operated websites and domain names

2. Addressing jurisdictional complexities in cyberspace

3. Developing laws to deal with crimes committed through or by artificial intelligence

4. Balancing content moderation and free speech concerns

5. Verifying information during incident response in the age of AI-generated content

The speakers suggested potential compromises, such as using both list-based and behaviour-based approaches to identify terrorist content online, balancing technological solutions with human input and context for content moderation, and considering both risks and opportunities of emerging technologies in counter-terrorism efforts.

Conclusion

This discussion highlighted the complex and evolving nature of terrorist use of ICT and the need for a comprehensive, collaborative approach to address these challenges. The speakers emphasised the importance of public-private partnerships, international cooperation, and adaptive strategies to keep pace with technological advancements. As the digital landscape continues to evolve, ongoing dialogue, knowledge sharing, and collaborative efforts among diverse stakeholders will be crucial in effectively countering terrorist use of the internet and emerging technologies.

Session Transcript

Jennifer Bramlette: Just doing a mic check. Good afternoon. It's working. Excellent. Yes. Mic check. Mic check. Everybody can hear. Excellent. Distinguished colleagues, good afternoon and welcome to all here in the room and joining us virtually for this IGF Open Forum on Countering the Use of Information and Communication Technologies, or ICT, for Terrorist Purposes. I welcome you on behalf of the Executive Director of the Counterterrorism Committee Executive Directorate, Assistant Secretary-General Natalia Gherman. It is a great pleasure to hold CTED's first session at an IGF event here in Riyadh. And it's an honor to be here today with some of CTED's close operational partners: the Parliamentary Assembly of the Mediterranean, or PAM, the Terrorism Prevention Branch of the United Nations Office on Drugs and Crime, or UNODC, Tech Against Terrorism, joining us virtually, and the Global Internet Forum to Counter Terrorism, or GIFCT, also joining us virtually. I would like to begin this session by explaining the work of CTED. As a special political mission supporting the Security Council's Counterterrorism Committee, CTED is mandated to conduct assessments of member states' implementation of United Nations Security Council resolutions on counterterrorism on behalf of the Counterterrorism Committee. In this work, CTED identifies good practice and also gaps in implementation, for which CTED works with partner organizations and states to facilitate technical assistance. CTED is additionally mandated to identify emerging trends and evolving terrorism threats, including through collaboration with the members of CTED's Global Research Network. Terrorist groups and their supporters continue to exploit the internet, social media, video games, and other online spaces, as well as emerging technologies, to engage in a wide range of terrorist-related activities. Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism. When it comes to countering terrorism and violent extremism conducive to terrorism, the United Nations Security Council has developed a robust framework of resolutions and policy documents. The Council has adopted 16 counter-terrorism-related resolutions and five policy documents over the past 23 years that specifically address ICT and now emerging technologies. Through these, the Council has mandated CTED to work on a growing list of increasingly complex and technologically advanced issues relating to countering the use of ICT and other emerging technologies for terrorist purposes. As such, CTED has mainstreamed ICT-related issues, including now AI and other emerging technologies, into its workstreams. In our capacity to identify new trends and emerging threats, CTED draws attention to how exponential leaps in the development and applicability of digital tools and emerging technologies could enhance terrorist capabilities. CTED also identifies what legal, policy, and operational measures UN member states could implement and how they could use new technologies to increase the effectiveness of their counter-terrorism efforts.
For example, the 2022 Delhi Declaration tasked CTED to develop non-binding guiding principles for member states to counter the use of Unmanned Aircraft Systems, or UAS, new financial technologies, and ICT for terrorist purposes. The Abu Dhabi Guiding Principles on threats posed by the use of Unmanned Aircraft Systems for terrorist purposes were adopted in December, 2023. The committee is currently negotiating the guiding principles on new financial technologies and will turn its attention to the ones for ICT. In carrying out its various activities, CTED holds two main principles at the forefront. Firstly, we draw particular attention to respect for human rights, fundamental freedoms, and the rule of law in the use of ICT and new technologies by states when countering terrorism. We also promote whole-of-society, whole-of-government, and gender-sensitive approaches as essential components for successful counterterrorism efforts. Secondly, we consistently emphasize the need for cooperation, collaboration, and partnerships. CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment. It is also necessary for member states to develop holistic, effective, and technologically advanced counterterrorism regimes. I will further detail CTED’s work on ICT in the technical panel, but now it is my great pleasure to welcome the Honorable Mr. Pedro Roque, the Vice President of the Parliamentary Assembly of the Mediterranean and one of our longstanding partners in the fight against terrorism to take the floor. Sir, I yield the floor to you.

Pedro Roque: No, you can hear me now. I think now it's fine. Thank you so much. So, ladies and gentlemen, dear friends, it is an honour and a pleasure to address the opening of this event. PAM, the Parliamentary Assembly of the Mediterranean, values most the fruitful cooperation with CTED, which resulted in the invitation to PAM to join the CTED Global Research Network, as well as a few other significant outcomes that I will mention during this intervention. I wish to thank also the colleagues of UNODC, the Global Internet Forum to Counter Terrorism, and Tech Against Terrorism for all the work you do with AI and ICT. PAM is an international organisation which gathers 34 member and associate national parliaments from the Euro-Mediterranean and Gulf regions. At present, PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard. If not properly regulated in a timely and effective way, the rapid advancement of AI and emerging technologies could severely harm democratic systems, disrupt societal structures, and pose significant risks to security and stability. Concrete actions and legislative frameworks for regulating AI and ICT should build on multi-stakeholder collaboration while ensuring compliance with international human rights law and the protection of individuals' fundamental freedoms. At the request of the UN Secretary-General, PAM actively participated and contributed to the preparations of the UN Summit of the Future, held in New York last September. In conjunction with the summit, PAM also organized a high-level side event on parliamentary support in re-establishing trust and reputation in multilateral governance. This event was held in cooperation with CTED and the permanent missions of Morocco and Italy to the UN and the Inter-Parliamentary Union. To achieve this objective, PAM parliaments committed to implementing the actions outlined in the Pact for the Future, particularly in its annex, the Global Digital Compact. This includes promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments, as well as evaluating their immediate and long-term risks and opportunities. Dear friends, through 2024, PAM experts, supported by our Center for Global Studies, CGS, and in partnership with CTED, devoted a major part of their work to monitoring and analyzing the developments of AI and emerging technologies, as well as their abuse by terrorists and criminal organizations. PAM-CGS has produced and recently released a report entitled The Malicious Use of AI and Emerging Technologies by Terrorists and Criminal Groups: Impact on Security, Legislation and Governance. This comprehensive research project, drafted in partnership with CTED, not only benefited from first-hand insights by PAM member parliaments, but also went through a rigorous peer-review process conducted by several PAM strategic partners, including, among others, Amazon, Interpol, Media Duemila, a network of national and international media organizations, NATO, the Policy Center for the New South and UNOCT.
The main outcomes of the report are, first, the creation of the PAM Permanent Global Parliamentary Observatory on AI and ICT, designated as a platform to monitor, analyze, promote and advocate for effective legislation, principles and criteria. The observatory is located in the Republic of San Marino and is supported by PAM-CGS. Second, the publication of a daily and weekly digest, compiled from open sources, providing PAM parliaments and stakeholders with up-to-date news and analysis on trends in AI and emerging technologies. The digest covers key areas of interest, including governance, security, legislation, defense, intelligence and warfare. In conclusion, I would like to highlight two important resolutions that PAM parliaments adopted during the 18th PAM plenary session held in Braga, Portugal in May 2024. One resolution focused on digitalization, emphasizing the need to bridge the digital divide and promote equal access to digital technologies both across and within PAM countries. It also acknowledges the role of digital transformation in advancing the achievement of the UN Sustainable Development Goals. The second resolution addresses artificial intelligence, urging the allocation of resources to advance AI research and development, with an emphasis on fostering innovation while safeguarding human rights, fundamental freedoms, privacy protection and non-discrimination. PAM will further explore these issues at its 19th plenary session scheduled for February 2025 in Rome and during its new tenure as the Presidency of the Coordination Mechanism of Parliamentary Assemblies on Counterterrorism, including its political dialogue pillar. Additionally, I would like to inform you that PAM-CGS is currently working on two new reports: one focuses on the resilience of democratic systems in relation to the misuse of AI and new technologies, and another, at the request of CTED, on the use of spyware and its legislative regulation. PAM will continue to collaborate with the United Nations, the Internet Governance Forum, its Member States and all stakeholders to shape a safer and more equitable digital world. I thank you for your attention.

Jennifer Bramlette: …and context about the fight against terrorism and the malicious use of artificial intelligence, both from a cross-regional perspective and from the perspective of key government actors and partners, namely parliamentarians. And I don't know if anybody in the room has been able to sit in on any of the parliamentarian track that's happening way down at the end of the far corner, but the speakers there are phenomenal, the parliamentarians present are so engaged, and it is essential to have all of government on board, including the elected officials. So as I mentioned while wearing the hat of the CTED Executive Director, I'd like to come back to the technical aspects of CTED's ICT mandate as given by Security Council resolutions. And perhaps would somebody be kind enough to shut the door? Not that it'll block the microphone from the other events that much, but that's great. Thank you so much. So some of our mandates, I mean it's a very widespread mandate that we have for ICT. The specifics of it include preventing the use of ICTs for terrorist purposes, including for recruitment and incitement to commit terrorist acts, as well as for the financing, planning, and preparation of their activities. We have a mandate for countering terrorist narratives online and offline, gathering, processing, and sharing digital data and evidence, cyber security, but only in relation to the protection of critical infrastructure, and countering the financing of terrorism via new financial technologies and payment methods like crowdfunding. CTED is additionally looking at new trends and evolving threats in terrorist use of ICT, to include threats and risks relating to advances in AI, the role of algorithmic amplification in promoting harmful and violent content, the misuse of video gaming platforms and related spaces, and risks associated with terrorist exploitation of dual-use technologies like 3D printing and advanced robotics. As part of its work on human rights and fundamental freedoms, CTED addresses areas related to the programming behind AI and algorithm-based systems to ensure that it does not include bias, for example. We also look at privacy, data protection, and the lawful collection, handling, and sharing of data, and transparency and accountability for governments and the tech sector when it comes to content removal practices and data requests. Through its many assessment visits, CTED has noted that member states face a range of challenges when it comes to countering the use of ICT for terrorist purposes. Many of these stem from the sheer numbers and diversification of users across a multitude of decentralized online spaces and using a myriad of digital tools. Then there is the rapid increase in the availability and technological capabilities of AI and other emerging technical tools, and of course the continued social, economic, and political drivers of violent extremism and terrorism. The three together make a perfect storm for terrorists being able to operate with sometimes seeming impunity, with many of the challenges that member states are facing. And where some of these challenges really come to bear is how they're incorporating ICT into their own counterterrorism systems, both in consideration of their existing resources and capabilities and in respect to compliance with their obligations under domestic and international human rights law.
So for example, there are member states who are extremely technologically advanced, who have no trouble bringing new tech in and onboarding it, using virtual reality and alternate reality or augmented reality systems to test strategies, to work through contingency plans, for training in the event a terrorist attack does happen, whereas other member states have trouble getting electricity to their police stations. So, as technology increases, this gap is widening. One of the biggest capacity gaps we note from our dialogue with member states is a shortage of tech talent and cutting-edge equipment in government entities. Issues of how to build that tech talent and then attract it into government positions and then retain it when the private sector and other avenues offer greater financial rewards are pressing questions, and there are no simple or inexpensive solutions. Another common shortfall observed in many states is that criminal justice systems, especially traditional criminal justice systems, are just not designed to address crimes committed in online spaces or through cyber means. So where you have countries who are still meeting in courtrooms without video cameras, without screens, without a capacity to handle electronic evidence or do video interviews, it's almost impossible for them to prosecute crimes that are committed online, where you are entirely reliant on the admission of electronic evidence and other digital tools and digital forensics to build a case and for a judge to try it effectively. Also, most states don't even have on their books laws to deal with crimes committed through or by artificial intelligence. We've even been asked by authorities, like, how can we arrest a chatbot? How can we prosecute an AI? And those are really good questions, and there are no templated answers. Perhaps Ari can talk about whether there are any plans for UNODC or any other entity to build a model law. There are also jurisdictional complexities in cyberspace. For example, gray area content could be illegal in one country, but not in the countries bordering it. And so, like the examples outlined by PAM, many states are working together to build a cross-border consensus and to implement multilateral legal and operational frameworks to deal with these and many other ICT-related challenges. CTED, the Counterterrorism Committee and the UN Security Council are also working through their international frameworks and multi-stakeholder processes to help states address these challenges. In developing the non-binding guiding principles for member states on ICT, CTED collaborated with over 100 partner agencies, including law enforcement and security services, legal and criminal justice sectors, capacity-building entities, the private sector, technological companies, academia and civil society organizations, to gather good practices and effective operational measures for ICT and emerging tech. Some of the areas addressed by the draft guiding principles include the conduct of regular risk and readiness assessments. This is something that has been identified as good practice, but not nearly enough member states do it. They might do it once, they might not do it at all, but very few conduct regular risk and readiness assessments. And by readiness assessments, I mean a state looking at its own capacities, its own resources, and it's a future look as to whether or not what it has ordered through its procurement processes is going to be useful when it finally gets delivered three years down the road.
Other areas of the guiding principles include the need for updating counter-terrorism laws and regulatory frameworks, obviously; the development of guidelines for strategic communications and counter-messaging algorithms, both for states and for the tech companies; the creation of content moderation and cross-platform reporting mechanisms; and recommendations for online investigations and how to more effectively and lawfully handle digital evidence. CTED cataloged these effective practices and noted a number of other ones relating to safety by design, ethical programming, and the conduct of security and human rights impact assessments for AI and algorithm-driven systems. We also captured the positive impact already demonstrated by investment in digital and AI literacy programs for all levels of society. We further developed the guiding principles to ameliorate a range of concerns about the serious adverse effects on human rights that the use of new technologies by states without proper regulation, oversight, and accountability is having. I'd like to conclude by highlighting that many Security Council resolutions and the Delhi Declaration stress the importance of partnerships, in particular public-private partnerships. CTED actively cooperates with Tech Against Terrorism, the Christchurch Call, and the industry-led Global Internet Forum to Counter Terrorism, two of which are up next in our panel. And I'd like to now turn the floor to Arianna Lepore from the Terrorism Prevention Branch of the United Nations Office on Drugs and Crime, another close operating partner and dear friend, to discuss the work of the TPB on ICT and electronic evidence. Thank you.

Arianna Lepore: Thanks very much, Jennifer. Thanks for inviting us, UNODC, here. We have a long-standing partnership with CTED, as well as with PAM and colleagues Erin and Adam. So it's great to pick up from where you left it, the importance of partnership. And we hope that also here in this forum we are able to establish contacts and continue our dialogue together. The work of UNODC blends naturally with the work of CTED, in the sense that normally our colleagues in CTED inform our work: thanks to their assessments, and thanks to the mandate that UNODC has, which is to provide technical assistance to member states in the fight against terrorism, UNODC, and in particular its Terrorism Prevention Branch, where I belong, puts together programs and projects in order to support and build the capacity of criminal justice officials in fighting terrorism. UNODC operates under Security Council resolutions, the 19 Conventions, the Secretary-General's Action Plan on CVE, so we have a mandate which is very stringent, and for a few years now we have been working very much, sparing no efforts, on the issue of ICT, and as we go forward, we are expanding and delineating a new strategy on how to deal with emerging technology. It was back seven years ago, in 2017, in the aftermath of the adoption of Resolution 2322, which requested member states to increase the level of international cooperation, in particular in the handling of electronic evidence, that UNODC launched what we call the Global Initiative on Handling Electronic Evidence, which I coordinate. The Global Initiative on Handling Electronic Evidence was conceptualized with colleagues at CTED and with the International Association of Prosecutors, and now it's a flagship project of UNODC. The Terrorism Prevention Branch sits in Vienna, but UNODC has regional offices and country offices, including here in Saudi Arabia, and my colleague is the head of the office here, so we have the capacity to reach out at the ground level and create very close relationships with the practitioners we work with. So the Global Initiative was launched seven years ago. The purpose was exactly that. First of all, to foster public-private partnership, and it was thanks also to the efforts of CTED and our efforts to work closely with the private sector that the initiative… is a fully-fledged project that has a holistic approach, so it involves the private sector, involves the experts, involves the practitioners, the academia, and we developed different streams of work. The goal is to support law enforcement, prosecutors, judges, central authorities, competent authorities for international cooperation in the preservation and production of electronic evidence for criminal cases. How did we do that? Through the development of tools, which is our bread and butter, including the development of model legislation, of course. Now, I'd like to focus our attention on the main tool of this global initiative, which is the Practical Guide on Handling Electronic Evidence. It has been an extensive work done with colleagues at CTED, with colleagues that represented the tech industry, and it's a guide, technically a guide, a manual, that step by step informs criminal justice practitioners on how to request preservation of electronic evidence, emergency disclosure, voluntary disclosure, and, where, depending on the data that is requested, direct requests are not possible, how to begin a formal mutual legal assistance process.
It contains a mapping of now more than 100 service providers. At the moment, ICT service providers. Nevertheless, just last week in Vienna, we conducted the very first expert group meeting in order to include the FinTech providers, and so to create the link not only with electronic evidence but also with financial electronic evidence, because we heard from practitioners that there is more and more an emerging need to connect the two. So very soon we will have an annex to the guide that will also contain a mapping of VASPs and FinTech providers, and how to approach them and request preservation, disclosure, and so forth, and all the procedures that entails. The guide also contains model forms on how to request that information from the private sector, because back then we, and CTED in particular, were hearing the complaints of the criminal justice officials: they would send requests to the private sector, and the requests were never answered. But then we spoke with the private sector, and they said the type of requests they would receive were impossible to answer: terabytes and terabytes of material, 10 years of evidence being requested, impossible. So we tried to seat them all around the table, and we developed forms which diligently contain all the elements that would enable the private sector, the providers, to respond. So the guide is the main tool around which all the capacity-building support that UNODC offers is constructed. Now, the guide is global in nature, but more and more, advancing in our program, we have customized the guide, tailored it to specific member states that have requested it. So we have a customized guide for Pakistan, for India, for the Maldives, and we keep counting. So member states will come to us, and then we will do thorough research on the procedural law, the legislation, and then instead of quoting worldwide legislation, we would design a guide specific for that country. And this is one of the priorities of UNODC: to make our work sustainable, we also develop train-the-trainer modules on the Practical Guide, so that we can embed this guide within training institutes so that the transmission of knowledge is up and running. The issue of the model legislation that was mentioned by Jennifer is fundamental. UNODC does that in the context of its work on all crime types, but specifically, in 2021 we updated the UNODC model legislation on mutual legal assistance, which now contains provisions on handling, receiving, and transmitting electronic evidence. So when countries are about to update their MLA law, they can come to us, request assistance and see what type of provisions we have put together. All of these, all those tools, are available on our platform. We have created an electronic evidence hub, but we have not stopped there. There are two last points which I'd like to make. The first one is that, as Jennifer said and colleagues will probably also mention, technology is advancing. Jennifer mentioned some of the challenges that we will face. We are already facing them: artificial intelligence and all those emerging technologies. So UNODC is also developing a strategy on how to go about this: to counter the misuse of technology, but also to utilize technology to counter terrorists. So there is this dual challenge that UNODC will try to address, and you will hear more about our interventions. And last but not least, a word on the new convention on cybercrime.
As is known to everyone, in the next few days the text of the United Nations Convention Against Cybercrime will almost certainly be adopted by the UN General Assembly. Now, it's a convention on cybercrime; nevertheless there is an important segment in it, in the draft as it is now, that speaks about electronic evidence. So obviously that will also inform the work that we are doing, and we will monitor closely how the adoption goes and what the next steps will then be, when it will be ratified, the protocols and so forth. So, Jennifer, I would stop here, and I thank you for this opportunity.

Jennifer Bramlette: Thank you very much, Arianna. I would now like to turn the floor to… Mr. Adam Hadley, CBE, who is the Executive Director and Founder of Tech Against Terrorism. Adam, the floor is yours.

Adam Hadley: Jennifer, thank you very much. Can you hear me well from there? Yep, great. Wonderful. Well, thank you very much for having us today to present about the work of Tech Against Terrorism and some of our concerns at the IGF. We certainly consider the IGF to be a vital forum to discuss important matters such as the terrorist use of the internet. I'd like to frame our discussion around a paradigm shift in how we view the terrorist use of the internet. Historically, the terrorist use of the internet has been seen as a tactical tool for recruitment and radicalization, but increasingly our concern is that the internet is becoming a strategic battleground for terrorists and hostile nation states, but mainly for terrorists. So as well as sharing three critical challenges that we see at Tech Against Terrorism, I'll outline one positive potential for generative AI, and then suggest a need to focus on countering the terrorist use of the internet infrastructure, in particular terrorist-operated websites. So who are we at Tech Against Terrorism? What's our mission? Well, our mission is to save lives by disrupting the terrorist use of the internet, and we're proud to have been established by UN CTED way back in 2017 as a public-private partnership focused on bridging the divide between the private sector and the public sector. Accordingly, we've been recognized by a number of Security Council resolutions, and as Jennifer mentioned, the Delhi Declaration. Most recently, we've been referenced in Security Council resolution 2713, encouraging Tech Against Terrorism to support the Government of Somalia in countering the use of the internet by Al-Shabaab. We were established, as I said, to improve connections between companies and governments. We're a small, independent NGO based in London and we work across the entire digital ecosystem. Effectively, we aim to understand where the terrorists are using the internet and what practically can be done about this. We're global in approach and we have 24/7 coverage. I'd also like to recognise the great efforts by many other organisations, of course: as well as UN CTED, there's the EU Internet Forum, there's the Christchurch Call to Action, there's our partner initiative called Tech Against Terrorism Europe, TATE, funded by the EU, there's the Extremism and Gaming Research Network, the Institute for Strategic Dialogue and, of course, the GIFCT. I'm delighted that Erin is able to join us from the GIFCT in a few moments. At Tech Against Terrorism, we focus on the most egregious examples of terrorist use of the internet. It's a very important thing to stress that we predominantly focus on those terrorist organisations that have been designated by the UN, the US, the EU and other international bodies. This doesn't mean that we don't focus on the broader range of activity that terrorists conduct online, but rather that we believe it's important to focus where there's consensus, recognising that it is in that focus where we will be able to have the most impact. In terms of the teams at Tech Against Terrorism, we have our own threat intelligence team, Open Source Intelligence, we work with governments and platforms to build capacity, and we also develop technology to speed up the ability of our analysts and others to detect the terrorist use of the internet. In doing all of this, we aim to share resources cost-free in a collaborative way, and we have a number of resources that are available to platforms and governments, such as the knowledge sharing platform.
We also provide hashing services and other technical support services to platforms, including a trusted flagger portal. Now, in terms of the current landscape, what we'd argue is that currently we're seeing some of the most egregious examples of terrorist use of the internet in the last decade. Of course, platforms and governments face many threats. The geopolitical instability now is the highest it has been for many decades. And therefore, understandably, platforms and governments have many concerns to focus on. But what is certain is that counterterrorism is no longer the primary concern of many of these stakeholders. And arguably, it should be. Since October 2023, we've seen terrorist content online reach unprecedented levels, from terrorist organisations such as the Islamic State, al-Qaeda, also the Houthis, Hamas, Hezbollah, al-Shabaab. Quite frankly, the terrorist use of the internet now is at such high levels that we're really not sure what to do about it alone. And therefore, we call for improved action from the tech sector, from governments and from others, to ensure that the correct and appropriate level of resources are being brought to bear to tackle this. In our view, this threat is manifest, of course, offline more than anything else. We know that terrorist groups are regrouping. We know that attacks in Africa are very high. We know the risks coming from Central Asia with regards to ISKP. And their use of the internet is commensurate with this increased threat. The question is what to do about it. So at Tech Against Terrorism, we have some technology, mainly the Terrorist Content Analytics Platform, the TCAP, which seeks to identify and verify terrorist content online. But we can't do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community. It's great that there are industry-led initiatives like the GIFCT investing so much in this space, and we encourage the GIFCT to continue to do this in the future, and for the GIFCT to continue to be funded by the tech sector. At Tech Against Terrorism, we currently alert more than 140 platforms and we work with a range of stakeholders, governments and tech companies, and that's how we're funded, in quite an independent and transparent fashion. So we see there are three key challenges. The first, as alluded to by Jennifer at UN CTED just now, is around strategic communications. Historically, the terrorist use of the internet has been considered in quite a tactical way. What this means is that the terrorist use of the internet has been considered purely in terms of radicalisation and countering this, but we'd also argue that terrorist groups use the internet for strategic communications purposes, and most terrorist organisations are looking to have a political effect. They're looking to promote their domestic popularity or to project international standing, and therefore we think it's paramount to ensure that the way we counter the terrorist use of the internet doesn't just think about radicalisation and recruitment and incitement but also the political value of that speech. If terrorists are able to share their messages on social media, messaging out on their own websites, this is worth a lot to them strategically, and therefore in the context of hybrid warfare, countering terrorist strategic communications is of vital importance. The second challenge is around infrastructure.
Quite rightly we talk about the tech sector, and the tech sector has done an enormous amount over the years, as supported by the GIFCT, but we mustn't forget other sources of terrorist activity online. Terrorists now can create their own websites, their own apps, their own technologies. This presents a number of jurisdictional challenges, in particular at the governance level of the internet. Can terrorists, and should terrorists, be allowed to run their own websites? Should ISIS or Al-Qaeda have the right to buy their own domain name? If not, what should we do about it? Unfortunately, this is not a theoretical issue. We are seeing hundreds of these websites being set up, and often it's extremely difficult working with internet providers because of ambiguities around jurisdiction. What we are finding is that terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases, they're also going back onto the major platforms and are proving quite difficult to dislodge in a number of ways as they adapt their techniques. They potentially hide their content and become better at evading automated responses. This is not a criticism of the tech sector at all. I'm merely highlighting the formidable challenge that platforms have in keeping ahead of an extremely sophisticated adversary. But the infrastructure is something that I wanted to bring to the attention of the IGF, because surely more needs to be done to establish international frameworks where we have designated terrorist organisations buying domain names and buying hosting for their websites. The third challenge is about detection and analysis of terrorist content. There is a very large amount of terrorist content online. Somewhat paradoxically, it is hardest to analyse this on large platforms. The reason being, for data privacy reasons and other perfectly reasonable explanations, very large platforms are not easy to analyse at scale. What this means is that analysis of small platforms is easier. Analysis of larger platforms is more difficult. Therefore, we ask for improvements in data access, but we recognise some of the challenges in terms of data privacy where that's concerned. We commend platforms for doing what they can to share more about their activities in very often comprehensive transparency reports. So moving to the end of my intervention here, I certainly recognise the expert opinion that's being shared about the risks associated with AI and generative AI. We would argue, however, that generative AI also provides a significant opportunity to improve the accuracy and volume of content moderation decisions online to ensure that terrorist content can be detected at scale accurately. The accuracy is very important because in everything we do at Tech Against Terrorism and UN CTED, and I believe the GIFCT, of course we have to counter the terrorist use of the internet, but we have to ensure that fundamental freedoms and human rights are upheld. I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made, and I certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations. So looking towards 2025, underlying threats are increasing. They're increasing internationally and domestically. Internationally we have IS, we have al-Qaeda, we have al-Shabaab, we have many other terrorist organisations committing acts of violence in person, offline.
We're also seeing in a number of countries increased youth involvement in terrorist activities for reasons not fully understood, and we're seeing terrorists get better at exploiting grievances regarding geopolitical instability and state failure, and the role of the internet is only becoming more and more important in this. But yet, geopolitically, there is a risk that consensus about jurisdiction, where the internet is concerned, is going to reduce over time. There is a very real risk that at the very time we need increased consensus globally about internet governance, this may be more difficult to achieve because of geopolitical tensions. Our work at Tech Against Terrorism will continue. We're a small NGO of around 10 people. We are hoping that our 24/7 capability will help in responding to major terrorist attacks. We'll be launching our TrustSMART and a number of other services in support of the tech sector and governments. Concluding my remarks, I would like to emphasise that it's also important to talk about the infrastructure there, and in particular terrorist-operated websites. How can it be right for designated terrorist organisations to have the right to create top-level domain names? In fighting this, we would ask for improved clarity about jurisdiction and standardisation of responses. We commend the Somali government for doing such good work in this space and would encourage others to follow the model of the Somali government in taking down content and activity by al-Shabaab. So the Internet's role in global security has never been more critical. As we face the challenges of the next year, we believe that responding to the terrorist use of the Internet will be vital to ensure global stability. The question is not whether we can stop terrorists using the Internet, but what we can do together in a collaborative way, upholding fundamental freedoms, to push back against terrorist content and activity online. Thank you very much for your attention to these critical matters. And I will yield the floor to UN CTED. Thank you very much.

Jennifer Bramlette: Adam, thank you very much. As with the intervention from TPB, I'm not even going to try to summarise what you said. And in the interest of time, I want to make sure that Dr. Saltman has a full measure to talk about the work of GIFCT. So Dr. Saltman, the Membership and Programme Director of the Global Internet Forum to Counter Terrorism, you now have the floor.

Dr. Erin Saltman: Thank you so much. And it's always a pleasure to go last, to not have to repeat any of the wonderful and very timely points that my colleagues have made. But many thanks to UN CTED as well as the IGF for hosting a session on this topic and for allowing us to dial in virtually for those of us that couldn't attend in person. We have a bit of FOMO. We wish we were in the room with you. I want to talk a little bit about what GIFCT does, who we are for those that don't know us very well, and try to leave some room for questions too. If you don't know about the Global Internet Forum to Counter Terrorism, it was mentioned we're a little bit of a unique NGO. We are a non-profit, but we were in fact founded by tech companies to help tech companies counter terrorism and violent extremism, but with multi-stakeholderism built into our governance and our programmatic efforts. Just like terrorism has always been a transnational effort, it is also a very cross-platform effort, and I'll bet very few people in the room have just one app on their phone, so we should be educating ourselves and looking at normative behaviors online and realizing that bad actors, terrorists, and violent extremists are also very cross-platform, as Adam mentioned, in many of their efforts. With that, we realized we needed a safe space for tech to work together. Our efforts are broken down into roughly four buckets. One really is cross-platform tech solutions. I'll speak briefly to that. One is about incident response, where increasingly there are offline, real-world attacks and events taking place where the perpetrators and accomplices are using online aspects or assets to further the harm of their terrorism. We also want to further research and knowledge sharing, as well as information exchange and capacity building, and that includes work with governments and civil society so that knowledge exchange is really holistic, because the signals that a tech company is seeing are distinctly different to how law enforcement might be approaching it or, on the ground, how civil society is experiencing it. Because we share such sensitive information and provide a platform for information sharing around such time-sensitive issues, we do have membership criteria, which is also a little bit unique. You can't just come in the door and work with us. You have to meet a threshold, and this was built out in consultation with our independent advisory committee that includes UN CTED, among other government and non-governmental officials and experts. And this includes making sure that tech companies that work with us have things like an annual transparency report, have a public commitment to the UN Guiding Principles around human rights, make sure that they have clear terms of service, make sure that they have the ability to report something like terrorist content. And we take for granted on social media largely that you can report and flag content, but obviously on other platforms, like terrorist-operated websites, perhaps to Adam's point, or certain gameplay spaces, it might not be intuitive how you would flag to a platform or to authorities a terrorist or violent extremist signal that you're seeing. So once you become a member of GIFCT, things around cross-platform tech solutions do include a scaled hash-sharing database where GIFCT and our member companies can ingest hashed content of terrorist and violent extremist material when it fits our criteria.
And there were some questions in the chat here around defining terrorism, which again a million PhDs and a million more are needed on this topic. There is not conclusive agreement, but when we talk to tech companies that are members, and we have 30-plus member companies, which include your largest ones like your Microsofts, your Amazons, your Metas, your Googles, but also smaller or medium-sized companies like JustPaste.it or Discord and Twitch or Zoom, companies that never thought they'd have to come to the table and talk about terrorism as a topic until they realized exploitation was happening. And so when we talk about hashing terrorist content, we began with a list-based approach. Companies do have consensus where they look to the UN designation lists around terrorism and look at terrorist individuals and groups and can find common ground there. But we realized very quickly, and in consultation with human rights experts and civil society organizations, that there is an Islamist-extremist bias in most lists in a post-9/11 framework, and we wanted to get at some of the neo-Nazi and white supremacy attacks that we know are taking place in different parts of the world. And so we also started building in behavior-based buckets. When you look at online content, a list doesn't always cut it. A group affiliation or a card-carrying membership is not always clear in how terrorism, and especially lone actor terrorism, takes place. So our behavior-based buckets include things like hashing attacker manifestos, where that content in and of itself is justifying a terrorist attack, or things like branded terrorist and violent extremist content, which gets not only at some of these Islamist publications, but at some of these white supremacy and neo-Nazi related and otherwise other forms of violent extremist publications online. And every year we have an incident response and hash-sharing working group that is multi-stakeholder, to constantly evaluate and say, where can we go further? Should we expand this taxonomy? If we expand inclusion, would that impede on free speech and other human rights concerns? And so this is an iterative and evolving process over time. Hashing content also evolves in form. When we think content, we often think image or video, but in fact, Adam mentioned the TCAP, and that's for flagging URLs. Or when we see a terrorist attacker manifesto, that's usually in PDF form. So the forms of content that can be hashed have also had to evolve over time. On top of this cross-platform tooling, incident response, particularly as a critical point after the Christchurch attacks in New Zealand, meant that tech companies really wanted to work together to stop the viral spread of the perpetrator-related content in and around an attack.
Not every single event will have a live stream, but since the Christchurch event we have seen a number of lone-actor or otherwise planned attacks that do have these online aspects at play, such as a live stream, the publishing of a manifesto, or, as in Halle, Germany in 2019, the publishing of a PDF on how to 3D-print a gun. These are all assets in and around an attack, and we want to be able to hash and share them. Our incident response framework allows us to increase knowledge sharing, communicate with affected governments and law enforcement where appropriate, and share verified information. Generative AI has been mentioned in the last few comments, and it is also a concern: what happens when you start getting fake incident-response content in and around something that might not even have happened? How do we quickly verify and share information to stop the viral spread of misinformed or actively misleading incident content? This sort of verification process will be key to future incident response efforts. When we think of adaptation, this is where knowledge exchange, active learning, training and capacity building between sectors is really critical. We fund an academic wing of our work, the Global Network on Extremism and Technology, and while its insights are accessible to everyone, it also supports, with micro-grants, academics and experts around the world who have their finger on the pulse of extremist trends. This could be anything from AI-generated content, on which there is an entire insight series, to 3D printing and some of the concerns about how it is assisting and aiding terrorism and violent extremism, to gaming and gaming-adjacent platforms and what those signals look like, whether it is the modification of characters or whether you have a policy that allows a player to be named Adolf Hitler or not. These are things that tech companies are asking about and looking for policy guidance on across these sectors. So when it comes to knowledge exchange, the smallest trends being shared can really have an amplifier effect in helping tech companies understand what harm and threat might look like on their platforms. Along with this, we really want to understand different parts of the world and how violent extremism and terrorism are manifesting. There are some very broad-stroke global trends, but when we look at how extremists and terrorists use coded language, this is very colloquially specific, and when we see how memes, icons and imagery are used to evade detection, this is very specific to local context. So on top of the technology, which helps us get to scale and speed, we really do need the context that sits around what you might surface and see as a moderator. Even with a standard, agreed-upon entity like Islamic State, if I were to have you surface an image and it’s a guy in the back of a Toyota, it’s really hard to know whether that is foreign terrorist fighter imagery or literally just a man in the back of a Toyota. The same goes for a lot of different forms of violent extremist trends. So alongside the technological solutions, we will still need that human input, and we will still need that cross-sector knowledge sharing.
We’ve been very grateful, even as we advance our own fundamental thinking about what terrorist content means and looks like, to have CTED and others at the table to consult with, to ensure we are always communicating what we are aiming for, and to make sure counterterrorism efforts do not overstep and abuse other human rights, including freedom of expression. We’ve also ensured that we are on the ground. Not everything can be done over Zoom. Fortunately or unfortunately, we do host workshops in different parts of the world, and we have made sure that we are working with ground-based partners and governments in order to have nuanced dialogues, not just imparting the knowledge we have about trends online, but gaining valuable feedback on what these trends look like in specific regions. Earlier this year, we hosted workshops in Brazil for Latin America, as well as, most recently, in Sweden at the Nordic Democracy Forum. And next year, we’ll be working with the IIJ in Malta to convene around sub-Saharan Africa. So if anyone wants to follow up and work with us on where we can bring a two-way knowledge exchange and make sure that the lessons are learned on both sides, I’d really love to further that as we go. And lastly, each year we pick three to five questions around topics of counterterrorism that we know no one government or tech company can answer on its own, and we form working groups. People apply to join a working group, the groups meet a few times a year, and we fund the development of outputs that create best practices, that evolve our own incident response, and that evolve frameworks for understanding terrorist content. In the last couple of years, we’ve had working groups on our own hash-sharing taxonomy, as mentioned, but also on red-teaming, looking at the harms of generative AI content, and on blue-teaming, looking at the positives: positive interventions and how this new technology can help with intervention work, counter-narratives, redirection, and translation in language areas that a lot of moderators are blind to. So there are risks and opportunities as we advance this conversation. With that, I would love to open it up to more questions. There are so many rabbit holes, technically, philosophically, existentially, when we think of how to advance countering terrorism and violent extremism, but it is only through these multi-stakeholder collaborative efforts that we can really get a 360-degree view of the threat and the opportunity and decide where to take the next steps. And with that, I yield back to Jennifer and UNCTED. Thank you.

Jennifer Bramlette: Thank you very much, Erin. I really appreciate it. Every time I sit with you and with Adam, I learn something. We genuinely appreciate the time that you’ve taken to be here with us today. And I’d like to thank everybody who’s here in the room today as well. I know there are many other opportunities for things to do, and apparently at six o’clock everything closes. So unfortunately, I will not be able to open the floor up for questions, but I think some of us will be willing to stand out in the hallway and chat there to answer any questions that you may have. In closing, I do wish to thank you for being here and choosing to spend the last hour of a very busy day with us. It was an honor to have all of the speakers here. And really, the final word is on partnerships: it is through our partnerships and our collaborations, through leveraging our shared knowledge, our lessons learned and our good practices, that we will be able to proactively overcome these challenges. CTED will continue to pursue our work and to assist with the work of our partners as we move forward with the IGF and continue to counter terrorism. Thank you very much for being here today.

Jennifer Bramlette

Speech speed

140 words per minute

Speech length

2409 words

Speech time

1028 seconds

CTED’s mandate to assess member states’ implementation of UN resolutions on counterterrorism

Explanation

CTED is mandated to conduct assessments of member states’ implementations of UN Security Council resolutions on counterterrorism. This work involves identifying good practices and gaps in implementation, and facilitating technical assistance.

Evidence

CTED identifies good practice and also gaps in implementation for which CTED works with partner organizations and states to facilitate technical assistance.

Major Discussion Point

Countering Terrorist Use of Information and Communication Technologies (ICT)

Member states’ varying technological capabilities and resources for counterterrorism

Explanation

Member states face different challenges in countering terrorist use of ICT due to varying technological capabilities and resources. Some states are technologically advanced, while others struggle with basic infrastructure.

Evidence

There are member states who are extremely technologically advanced who have no trouble bringing new tech in and onboarding it using virtual reality and alternate reality or augmented reality systems to test strategies, to work through contingency plans for training in the event a terrorist attack does happen, whereas other member states have trouble getting electricity to their police stations.

Major Discussion Point

Challenges in Addressing Terrorist Use of ICT

Need for updated counter-terrorism laws and regulatory frameworks

Explanation

There is a need to update counter-terrorism laws and regulatory frameworks to address crimes committed in online spaces or through cyber means. Many states lack laws to deal with crimes committed through or by artificial intelligence.

Evidence

Most states don’t even have on their books laws to deal with crimes committed through or by artificial intelligence. We’ve even been asked by authorities, like, how can we arrest a chatbot? How can we prosecute an AI?

Major Discussion Point

Challenges in Addressing Terrorist Use of ICT

Potential of AI and quantum technologies to exacerbate online harms and real-world damages

Explanation

Developments in artificial intelligence and quantum technologies have the potential to increase risks for online harms and real-world damages. However, these technologies can also be valuable tools for preventing and countering terrorism when used in accordance with international law.

Evidence

Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet, these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism.

Major Discussion Point

Emerging Technologies and Their Impact on Counterterrorism

CTED’s inclusive approach involving member states, organizations, private sector, civil society, and academia

Explanation

CTED follows an inclusive approach that brings together various stakeholders to develop holistic, effective, and technologically advanced counterterrorism regimes. This multi-stakeholder approach is essential in the digital environment.

Evidence

CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment.

Major Discussion Point

Importance of Multi-stakeholder Collaboration

Shortage of tech talent and cutting-edge equipment in government entities

Explanation

One of the biggest capacity gaps noted is a shortage of tech talent and cutting-edge equipment in government entities. This presents challenges in attracting and retaining tech talent in government positions.

Evidence

One of the biggest capacity gaps we note from our dialogue with member states is a shortage of tech talent and cutting-edge equipment in government entities. Issues of how to build that tech talent and then attract it into government positions and then retain it when the private sector and other avenues offer greater financial rewards are pressing questions, and there are no simple or inexpensive solutions.

Major Discussion Point

Challenges in Addressing Terrorist Use of ICT

Pedro Roque

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Parliamentary Assembly of the Mediterranean’s efforts to regulate AI and emerging technologies

Explanation

The Parliamentary Assembly of the Mediterranean (PAM) is committed to fostering dialogue, cooperation, and joint initiatives towards the regulation of AI and emerging technologies. PAM supports the efforts of the UN and the international community in this regard.

Evidence

PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard.

Major Discussion Point

Countering Terrorist Use of Information and Communication Technologies (ICT)

Need for scientific understanding and impact assessments of AI and emerging technologies

Explanation

PAM parliaments are committed to promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments. This includes evaluating immediate and long-term risks and opportunities of these technologies.

Evidence

This includes promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments, as well as evaluating their immediate and long-term risks and opportunities.

Major Discussion Point

Emerging Technologies and Their Impact on Counterterrorism

Jurisdictional complexities in cyberspace and cross-border consensus building

Explanation

There are jurisdictional complexities in cyberspace, such as content that may be illegal in one country but not in neighboring countries. Many states are working together to build cross-border consensus and implement multilateral legal and operational frameworks to address these challenges.

Evidence

For example, gray area content could be illegal in one country, but not in the countries bordering it. And so like the examples outlined by Pam, many states are working together to build a cross-border consensus and to implement multilateral legal and operational frameworks to deal with these and many other ICT related challenges.

Major Discussion Point

Challenges in Addressing Terrorist Use of ICT

PAM’s collaboration with UN and other stakeholders to shape a safer digital world

Explanation

PAM is committed to collaborating with the United Nations, the Internet Governance Forum, Member States, and all stakeholders to shape a safer and more equitable digital world. This includes ongoing work on reports and initiatives related to AI and new technologies.

Evidence

PAM will continue to collaborate with the United Nations, the Internet Governance Forum, its Member States and all stakeholders to shape a safer and more equitable digital world.

Major Discussion Point

Importance of Multi-stakeholder Collaboration

Arianna Lepore

Speech speed

141 words per minute

Speech length

1266 words

Speech time

537 seconds

UNODC’s Global Initiative on Handling Electronic Evidence to support criminal justice practitioners

Explanation

UNODC launched the Global Initiative on Handling Electronic Evidence to support law enforcement, prosecutors, judges, and other authorities in handling electronic evidence for criminal cases. The initiative includes the development of tools and guides to assist practitioners in this area.

Evidence

The Global Initiative on Handling Electronic Evidence was launched seven years ago. The purpose was exactly that. First of all, to foster public-private partnership, and it was thanks also to the efforts of CEDED and our efforts to work closely with the private sector that the initiative… is a fully-fledged project that has a holistic approach, so involves the private sector, involves the experts, involves the practitioners, the academia, and we developed different streams of work.

Major Discussion Point

Countering Terrorist Use of Information and Communication Technologies (ICT)

UNODC’s partnership with CTED and other organizations in developing tools and guides

Explanation

UNODC collaborates closely with CTED and other organizations in developing tools and guides for handling electronic evidence. This partnership ensures that the work of UNODC is informed by assessments and mandates from other UN bodies.

Evidence

The work of UNODC blends naturally with the work of CTED in the sense that normally the case is that our colleagues in CTED inform our work in the sense that thanks to their assessment and thanks to the mandate that UNODC has, which is to provide technical assistance to member states in the fight against terrorism, UNODC, and in particular its terrorism prevention branch, where I belong, put together programs, projects, in order to support and build capacity of criminal justice officials in fighting terrorism.

Major Discussion Point

Importance of Multi-stakeholder Collaboration

Adam Hadley

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Tech Against Terrorism’s mission to disrupt terrorist use of the internet through public-private partnerships

Explanation

Tech Against Terrorism aims to save lives by disrupting terrorist use of the internet. The organization was established as a public-private partnership to bridge the divide between the private sector and the public sector in countering terrorist use of ICT.

Evidence

Well, our mission is to save lives by disrupting the terrorist use of the internet, and we’re proud to have been established by UNCTED way back in 2017 as a public-private partnership focused on bridging the divide between the private sector and the public sector.

Major Discussion Point

Countering Terrorist Use of Information and Communication Technologies (ICT)

Increasing entrepreneurial and imaginative use of technologies by terrorists

Explanation

Terrorists are becoming increasingly entrepreneurial and imaginative in their use of technologies. They are adapting their techniques to evade automated responses and are proving difficult to dislodge from major platforms.

Evidence

What we are finding is that terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases, they’re also going back onto the major platforms and are proving quite difficult to dislodge in a number of ways as they adapt their techniques.

Major Discussion Point

Challenges in Addressing Terrorist Use of ICT

Opportunities for generative AI to improve accuracy and volume of content moderation decisions

Explanation

While there are risks associated with AI and generative AI, there are also significant opportunities to improve the accuracy and volume of content moderation decisions. This could help detect terrorist content at scale more accurately while upholding fundamental freedoms and human rights.

Evidence

I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.

Major Discussion Point

Emerging Technologies and Their Impact on Counterterrorism

Tech Against Terrorism’s work with governments and platforms to build capacity

Explanation

Tech Against Terrorism works with governments and platforms to build capacity in countering terrorist use of the internet. They provide various resources and services to support this effort, including threat intelligence, capacity building, and technology development.

Evidence

At Tech Against Terrorism, we have some technology, mainly the terrorist content analytics platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community.

Major Discussion Point

Importance of Multi-stakeholder Collaboration

Dr. Erin Saltman

Speech speed

170 words per minute

Speech length

2065 words

Speech time

725 seconds

GIFCT’s cross-platform tech solutions and incident response framework for countering terrorist content online

Explanation

The Global Internet Forum to Counter Terrorism (GIFCT) provides cross-platform tech solutions and an incident response framework to counter terrorist content online. This includes a hash-sharing database and collaborative efforts to stop the viral spread of terrorist content during incidents.

Evidence

Once you become a member of GIF-CT, things around cross-platform tech solutions do include a scaled hash-sharing database where GIF-CT and our member companies can ingest hashed content of terrorist and violent extremist material when it fits our criteria.

Major Discussion Point

Countering Terrorist Use of Information and Communication Technologies (ICT)

Challenges and opportunities presented by AI-generated content in incident response efforts

Explanation

AI-generated content presents both challenges and opportunities in incident response efforts. There are concerns about fake incident response content, but also potential for AI to assist in verification processes and positive interventions.

Evidence

We’ve mentioned generative ai in the last few comments and it’s also a concern of what might happen when you start getting fake incident response content in and around something that might or might not have even happened how do we quickly verify and share information to stop viral spread of misinformed or actually misleading incident content and so this sort of verification process will be key to future incident response efforts

Major Discussion Point

Emerging Technologies and Their Impact on Counterterrorism

GIFCT’s multi-stakeholder governance and programmatic efforts

Explanation

GIFCT emphasizes multi-stakeholder governance and programmatic efforts in countering terrorist use of the internet. This includes collaboration with governments, civil society, and tech companies to share knowledge and develop best practices.

Evidence

We’ve been very grateful, even in our own fundamental advancing of how we think of what terrorist content means and looks like, having CTED and others at the table to consult with and ensure we’re always communicating what we’re trying to aim for and how we don’t overstep in counterterrorism efforts to abuse other forms of human rights, including freedom of expression.

Major Discussion Point

Importance of Multi-stakeholder Collaboration

Agreements

Agreement Points

Importance of multi-stakeholder collaboration

Jennifer Bramlette

Pedro Roque

Arianna Lepore

Adam Hadley

Dr. Erin Saltman

CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment.

PAM will continue to collaborate with the United Nations, the Internet Governance Forum, its Member States and all stakeholders to shape a safer and more equitable digital world.

The work of UNODC blends naturally with the work of CTED in the sense that normally the case is that our colleagues in CTED inform our work in the sense that thanks to their assessment and thanks to the mandate that UNODC has, which is to provide technical assistance to member states in the fight against terrorism, UNODC, and in particular its terrorism prevention branch, where I belong, put together programs, projects, in order to support and build capacity of criminal justice officials in fighting terrorism.

At Tech Against Terrorism, we have some technology, mainly the terrorist content analytics platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community.

We’ve been very grateful, even in our own fundamental advancing of how we think of what terrorist content means and looks like, having CTED and others at the table to consult with and ensure we’re always communicating what we’re trying to aim for and how we don’t overstep in counterterrorism efforts to abuse other forms of human rights, including freedom of expression.

All speakers emphasized the critical importance of collaboration between various stakeholders, including governments, international organizations, private sector, civil society, and academia in addressing the challenges of terrorist use of ICT.

Challenges in addressing terrorist use of ICT

Jennifer Bramlette

Adam Hadley

There are member states who are extremely technologically advanced who have no trouble bringing new tech in and onboarding it using virtual reality and alternate reality or augmented reality systems to test strategies, to work through contingency plans for training in the event a terrorist attack does happen, whereas other member states have trouble getting electricity to their police stations.

What we are finding is that terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases, they’re also going back onto the major platforms and are proving quite difficult to dislodge in a number of ways as they adapt their techniques.

Both speakers highlighted the challenges in addressing terrorist use of ICT, including the varying technological capabilities of different states and the adaptability of terrorist groups in using new technologies.

Similar Viewpoints

All three speakers acknowledge both the potential risks and benefits of AI and emerging technologies in countering terrorist use of ICT. They emphasize the need for responsible use and development of these technologies to maximize their benefits while mitigating potential harms.

Jennifer Bramlette

Adam Hadley

Dr. Erin Saltman

Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet, these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism.

I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.

We’ve mentioned generative ai in the last few comments and it’s also a concern of what might happen when you start getting fake incident response content in and around something that might or might not have even happened how do we quickly verify and share information to stop viral spread of misinformed or actually misleading incident content and so this sort of verification process will be key to future incident response efforts

Unexpected Consensus

Need for updated legal frameworks

Jennifer Bramlette

Pedro Roque

Most states don’t even have on their books laws to deal with crimes committed through or by artificial intelligence. We’ve even been asked by authorities, like, how can we arrest a chatbot? How can we prosecute an AI?

PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard.

Despite coming from different perspectives (CTED and parliamentary assembly), both speakers strongly emphasized the urgent need for updated legal frameworks to address crimes committed through or by AI and emerging technologies. This unexpected consensus highlights the critical nature of this issue across different sectors.

Overall Assessment

Summary

The main areas of agreement among the speakers include the importance of multi-stakeholder collaboration, the challenges in addressing terrorist use of ICT, the dual nature of emerging technologies as both potential risks and tools for counterterrorism, and the need for updated legal frameworks.

Consensus level

There is a high level of consensus among the speakers on the core issues discussed. This strong agreement implies a shared understanding of the complex challenges in countering terrorist use of ICT and the need for collaborative, multi-faceted approaches. The consensus also suggests that future efforts in this area are likely to focus on strengthening partnerships, developing adaptive strategies to keep pace with technological advancements, and updating legal and regulatory frameworks to address emerging challenges.

Differences

Different Viewpoints

Approach to regulating AI and emerging technologies

Pedro Roque

Adam Hadley

PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard.

I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.

While Pedro Roque emphasizes regulation of AI and emerging technologies, Adam Hadley focuses more on the potential benefits of AI for content moderation and detection of terrorist content.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches to regulating and utilizing AI and emerging technologies in counterterrorism efforts.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of multi-stakeholder collaboration and the need to address the challenges posed by terrorist use of ICT. The differences mainly lie in the emphasis placed on various aspects of the issue, such as regulation, technological solutions, and human rights considerations. These differences do not significantly impede the overall goal of countering terrorist use of ICT, but rather highlight the complexity of the issue and the need for a comprehensive approach.

Partial Agreements

Partial Agreements

All speakers agree on the importance of multi-stakeholder collaboration in countering terrorist use of ICT, but they emphasize different aspects: CTED focuses on an inclusive approach, Tech Against Terrorism highlights technological solutions, and GIFCT stresses the balance between counterterrorism efforts and human rights.

Jennifer Bramlette

Adam Hadley

Dr. Erin Saltman

CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment.

At Tech Against Terrorism, we have some technology, mainly the terrorist content analytics platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community.

We’ve been very grateful, even in our own fundamental advancing of how we think of what terrorist content means and looks like, having CTED and others at the table to consult with and ensure we’re always communicating what we’re trying to aim for and how we don’t overstep in counterterrorism efforts to abuse other forms of human rights, including freedom of expression.

Similar Viewpoints

All three speakers acknowledge both the potential risks and benefits of AI and emerging technologies in countering terrorist use of ICT. They emphasize the need for responsible use and development of these technologies to maximize their benefits while mitigating potential harms.

Jennifer Bramlette

Adam Hadley

Dr. Erin Saltman

Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet, these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism.

I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.

We’ve mentioned generative ai in the last few comments and it’s also a concern of what might happen when you start getting fake incident response content in and around something that might or might not have even happened how do we quickly verify and share information to stop viral spread of misinformed or actually misleading incident content and so this sort of verification process will be key to future incident response efforts

Takeaways

Key Takeaways

Terrorist use of ICT and emerging technologies poses a growing threat that requires coordinated multi-stakeholder efforts to address

There is a need for updated laws, regulatory frameworks, and improved technological capabilities to counter terrorist use of ICT

Public-private partnerships and cross-sector collaboration are essential for effective counterterrorism efforts online

Emerging technologies like AI present both risks and opportunities for counterterrorism efforts

Balancing security measures with human rights and fundamental freedoms remains a key challenge

Resolutions and Action Items

CTED to develop non-binding guiding principles for member states on countering terrorist use of ICT

UNODC to expand its Practical Guide on Handling Electronic Evidence to include FinTech providers

Tech Against Terrorism to continue 24/7 capability to respond to major terrorist attacks

GIFCT to host regional workshops for knowledge exchange on local extremist trends

Unresolved Issues

How to effectively regulate terrorist-operated websites and domain names

Addressing jurisdictional complexities in cyberspace

Developing laws to deal with crimes committed through or by artificial intelligence

Balancing content moderation and free speech concerns

Verifying information during incident response in the age of AI-generated content

Suggested Compromises

Using both list-based and behavior-based approaches to identify terrorist content online

Balancing technological solutions with human input and context for content moderation

Considering both risks and opportunities of emerging technologies like AI in counterterrorism efforts

Thought Provoking Comments

Historically, the terrorist use of the internet has been seen as a tactical tool for recruitment and radicalization, but increasingly our concern is that the internet is becoming a strategic battleground for terrorists and hostile nation states, but mainly for terrorists.

speaker

Adam Hadley

reason

This comment introduces a paradigm shift in how we view terrorist use of the internet, framing it as a strategic rather than merely tactical tool. This perspective challenges existing assumptions and broadens the scope of the discussion.

impact

It set the tone for a more comprehensive examination of terrorist activities online, leading to discussions about infrastructure, strategic communications, and the need for a more holistic approach to countering terrorist use of the internet.

Can terrorists and should terrorists be allowed to run their own websites? Should ISIS or Al-Qaeda have the right to buy their own domain name? If not, what should we do about it?

speaker

Adam Hadley

reason

These questions highlight a critical gap in current internet governance and counterterrorism efforts. They force consideration of complex issues around freedom of speech, internet regulation, and the practical challenges of countering terrorist infrastructure online.

impact

This comment shifted the discussion towards the need for clearer international frameworks and jurisdictional agreements to address terrorist-operated websites, emphasizing a gap in current counterterrorism efforts.

We realized very quickly, and in consultation with human rights experts and civil society organizations, there is an Islamist-extremist bias in most lists in a post-9-11 framework, and we wanted to get at some of the neo-Nazi and white supremacy attacks that we know are taking place in different parts of the world.

speaker

Dr. Erin Saltman

reason

This insight reveals a critical bias in existing counterterrorism frameworks and demonstrates a commitment to a more comprehensive and equitable approach to identifying terrorist content.

impact

It led to a discussion about the evolution of GIFCT’s approach, including the development of behavior-based buckets for identifying terrorist content, showing how the field is adapting to address a wider range of extremist threats.

Even a standard agreed upon entity like Islamic State, if I were to have you surface an image and it’s a guy in the back of a Toyota, it’s really hard to know if that is foreign terrorist fighter imagery or if that is literally just a man in the back of a Toyota.

speaker

Dr. Erin Saltman

reason

This example vividly illustrates the complexities involved in content moderation and the limitations of purely technological solutions in identifying terrorist content.

impact

It underscored the need for human input and cross-sector knowledge sharing in counterterrorism efforts, leading to a discussion about the importance of local context and nuanced understanding in content moderation.

Overall Assessment

These key comments shaped the discussion by broadening the perspective on terrorist use of the internet from tactical to strategic, highlighting critical gaps in current approaches, addressing biases in existing frameworks, and emphasizing the complexities involved in identifying and moderating terrorist content. They collectively pushed the conversation towards more nuanced, comprehensive, and collaborative approaches to countering terrorist use of the internet, while also highlighting the ongoing challenges and the need for continued evolution in this field.

Follow-up Questions

How can we arrest a chatbot or prosecute an AI?

speaker

Jennifer Bramlette

explanation

This highlights the legal challenges in addressing crimes committed through or by artificial intelligence, which many states are unprepared for.

Are there any plans for UNODC or any other entity to build a model law for crimes committed through or by artificial intelligence?

speaker

Jennifer Bramlette

explanation

This suggests a need for international guidance on legislating AI-related crimes.

How can we address the jurisdictional complexities in cyberspace, particularly regarding content that may be illegal in one country but not in others?

speaker

Jennifer Bramlette

explanation

This highlights the need for international cooperation and standardization in addressing online terrorist content.

How can we improve data access for analyzing terrorist content on large platforms while respecting data privacy concerns?

speaker

Adam Hadley

explanation

This addresses the challenge of effectively monitoring large platforms for terrorist content while balancing privacy concerns.

Should terrorists and designated terrorist organizations be allowed to run their own websites or buy domain names? If not, what should be done about it?

speaker

Adam Hadley

explanation

This raises important questions about internet governance and the limits of online freedoms for designated terrorist groups.

How can we improve clarity about jurisdiction and standardization of responses regarding terrorist-operated websites?

speaker

Adam Hadley

explanation

This suggests a need for international cooperation in addressing terrorist use of internet infrastructure.

How can we quickly verify and share information to stop viral spread of misinformed or misleading incident content, particularly in the context of generative AI?

speaker

Dr. Erin Saltman

explanation

This addresses the challenge of combating misinformation during terrorist incidents, especially with the rise of AI-generated content.

How can we further develop and implement positive interventions using AI technology for counter-narratives, redirecting, and translation in areas where moderators are blind?

speaker

Dr. Erin Saltman

explanation

This explores the potential positive applications of AI in countering terrorism and violent extremism online.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #257 Emerging Norms for Digital Public Infrastructure

WS #257 Emerging Norms for Digital Public Infrastructure

Session at a Glance

Summary

This panel discussion focused on the concept of Digital Public Infrastructure (DPI) and its implications for global digital development. The panelists explored various definitions and applications of DPI, with a particular emphasis on its role in emerging economies. They discussed how DPI, such as digital identity systems and payment infrastructures, can promote financial inclusion and economic growth, citing examples from India and Brazil.

The conversation highlighted the tension between viewing DPI as a tool for digital sovereignty and concerns about potential fragmentation of the internet. Panelists debated the role of governments versus markets in developing DPI, with some arguing for a minimum viable infrastructure approach to protect free markets and democracy. The discussion also touched on the importance of interoperability, open standards, and multi-stakeholder cooperation in DPI development.

A key point of contention was the definition of DPI itself, with some panelists advocating for a narrower, value-driven definition to avoid confusion and potential misuse of the term. The panel explored the differences in DPI approaches between the Global North and South, with some arguing that DPI discussions are more prevalent in the Global South due to infrastructure gaps and a desire to address digital colonization.

The discussion concluded with reflections on the need for context-specific DPI solutions, the importance of transparency and accountability in DPI development, and the potential for DPI to address digital inequalities. Panelists emphasized the need for careful consideration of trade-offs and long-term impacts when implementing DPI initiatives.

Keypoints

Major discussion points:

– Defining digital public infrastructure (DPI) and its scope

– The role of government vs. private sector in developing DPI

– Concerns about digital sovereignty and fragmentation

– Challenges and opportunities for DPI in developing countries

– Interoperability and open standards for DPI

The overall purpose of the discussion was to explore the concept of digital public infrastructure, examining different perspectives on its definition, implementation, and implications across various national contexts. The panelists aimed to identify key issues and potential action items for the global community regarding DPI development.

The tone of the discussion was largely analytical and academic, with panelists offering nuanced views based on their experiences and research. There were moments of disagreement, particularly around the role of markets versus government in infrastructure development. The tone became more passionate when discussing issues of digital colonialism and inequality between the Global North and South. Overall, the conversation maintained a constructive and collaborative spirit despite differing viewpoints.

Speakers

– Milton Mueller: Director of the Internet Governance Project at Georgia Institute of Technology

– Luca Belli: Professor at FGV Law School, Director of the Center for Technology and Society at FGV

– Henri Verdier: Ambassador from the French Ministry for Europe and Foreign Affairs

– David Magård: Works for the Swedish Public Registration Office, coordinator of EWC (European Wallet Consortium)

– Jyoti Panday: Regional Director for the Internet Governance Project

– Anriette Esterhuysen: Former chair of the Multistakeholder Advisory Group

Additional speakers:

– Kashfi Inua: From Nigeria (audience member)

Full session report

Digital Public Infrastructure (DPI) Panel Discussion Summary

This panel discussion brought together experts from various fields to explore the concept of Digital Public Infrastructure (DPI) and its implications for global digital development. The conversation covered a wide range of topics, including definitions of DPI, its potential benefits and risks, implementation challenges, and the differing perspectives between the Global North and South.

Defining Digital Public Infrastructure

One of the central themes of the discussion was the lack of consensus on a clear definition of DPI. Several panelists offered their perspectives:

– Henri Verdier described DPI as the “minimum necessary infrastructure to protect free internet, market and democracy”.

– Luca Belli defined it as “digital systems built on open standards that are interoperable and secure to provide services”.

– Jyoti Panday characterized DPI as an “approach for building large-scale networks, platforms and services essential for digital economy”.

– David Magård noted that DPI is not a commonly used term in the EU, where the focus is instead on interoperability.

The diversity of these definitions highlighted the need for a more precise and value-driven conceptualization of DPI. Anriette Esterhuysen, in particular, emphasized the importance of narrowing the definition to avoid confusion and potential misuse of the term.

Benefits and Risks of DPI

The panelists discussed both the potential advantages and drawbacks of DPI implementation:

Benefits:

– Breaking monopolies and increasing competition, as exemplified by the PIX payments system in Brazil

– Facilitating multi-stakeholder cooperation and intergovernmental collaboration

– Cost-effectiveness, as highlighted by Henri Verdier, who noted that DPI can be significantly cheaper than traditional infrastructure

Risks:

– Centralization and compromise of digital security

– Potential for control rather than public benefit, depending on the context

– Risk of fragmentation if pursued with a sovereignty-based approach

– Potential for creating new forms of monopolies or oligopolies

Luca Belli provided a detailed comparison between Brazil’s PIX system and India’s UPI, highlighting how these DPI initiatives have increased financial inclusion and competition in the payments sector.

Implementation and Governance

The panel explored various approaches to DPI implementation and governance:

– Luca Belli advocated for a bottom-up approach with stakeholder engagement based on local realities.

– David Magård discussed the EU’s approach to digital identity wallets, emphasizing the need for regulation and openness in DPI development.

– Jyoti Panday highlighted the importance of institutional frameworks and oversight mechanisms.

– Henri Verdier emphasized that DPI should be developed through multi-stakeholder processes.

There was general agreement on the need for proper institutional frameworks, oversight mechanisms, and regulation to ensure openness and protect democratic values. The importance of open standards and interoperability in DPI development was stressed by multiple panelists.

The role of government versus market forces in DPI development was a key point of contention. Some panelists expressed concerns about DPI leading to fragmentation if pursued with a sovereignty-based approach, while others emphasized the need for government intervention to protect public interests.

Global Perspectives on DPI

The discussion revealed significant differences in how DPI is perceived and approached in different parts of the world:

– Anriette Esterhuysen noted that DPI is more frequently discussed in the Global South due to the lack of existing infrastructure.

– Luca Belli observed that the Global North is now studying Global South approaches to digital sovereignty.

– Henri Verdier pointed out the differences in DPI needs between the Global North and South.

– Milton Mueller highlighted institutional barriers to infrastructure development in some countries.

An audience member from Nigeria raised an important point about the need to consider “leapfrog regions” lacking infrastructure in the Global North as well, challenging the traditional North-South divide in digital development discussions.

Challenges and Future Considerations

The panel identified several key challenges and areas for future consideration:

1. Funding: The discussion touched on the potential role of taxation of big tech companies in funding DPI initiatives.

2. Interoperability: The need for open standards and cross-border compatibility was emphasized by several speakers.

3. Balancing sovereignty and global cooperation: The discussion highlighted the tension between national approaches to DPI and the need for international collaboration.

4. Transparency and accountability: Ensuring these principles in DPI development was seen as crucial for building trust and enabling proper oversight.

5. Digital divide: Addressing gaps in connectivity alongside DPI development was identified as a critical concern, with Luca Belli emphasizing the importance of meaningful connectivity for the success of DPI initiatives.

6. Role of AI: The potential impact of artificial intelligence on DPI was briefly mentioned at the beginning of the discussion.

7. Central banks and cryptocurrencies: The panel touched on the role of central banks in relation to cryptocurrencies and DPI.

Conclusion

The discussion concluded with reflections on the need for context-specific DPI solutions, the importance of transparency and accountability in DPI development, and the potential for DPI to address digital inequalities. Panelists emphasized the need for careful consideration of trade-offs and long-term impacts when implementing DPI initiatives.

While there was moderate consensus on the importance of DPI and the need for proper governance, significant differences remained in perspectives on its implementation, funding, and potential impacts. This suggests that further dialogue and research are needed to develop a more unified approach to DPI development and implementation globally.

The panel discussion successfully highlighted the complex, multifaceted nature of DPI, moving beyond technical definitions to consider historical context, geopolitical factors, governance implications, and potential pitfalls. It revealed DPI as a concept with significant implications for digital governance, economic development, and global power dynamics in the digital age.

Session Transcript

Milton Mueller: Well, I’m going to introduce the topic, and then I’m going to introduce the panelists. We had a last-minute cancellation, so we’re lacking a perspective from India, but we’re talking today about digital public infrastructure. This has become a fashionable term. Somebody just told me that we should have called it AI digital public infrastructure because that’s even more fashionable, but in fact that is not a very scientific or accurate way of going about this. AI is a digital technology, and the infrastructure supporting digital services and applications, particularly payments, is something where we are bringing together new services in ways that raise policy issues regarding security, trust, competition in the digital economy, and the role of government and the private sector. So we are going to first examine what we mean by DPI. Is this just another buzzword? Is it something real? We’re going to talk about the institutional frameworks for collaboration between states and markets, oversight mechanisms, and the role of multi-stakeholder cooperation in fostering DPI production and governance. So what we’re going to do is spend about 30 minutes in this discussion, including our online audience and the local audience, fully aware that our online audience will probably have better sound than we have here. So, let me introduce the panelists now. Hovering above me on the screen is Ms. Jyoti Panday. She’s a regional director for the Internet Governance Project. I guess I should introduce myself. I’m Milton Mueller. I’m with the Internet Governance Project, too. I’m the director, and we’re located at the Georgia Institute of Technology. Seated, going from my right to my left, is Luca Belli, and he’s from Brazil. I can’t remember the name of your institute. Yes, the well-known Center for Technology and Society, which goes by the acronym FGV, which nobody relates to technology. FGV is the acronym of the foundation. The foundation, right. Fundação Getulio Vargas. So, it’s the Center for Technology and Society at FGV. And then we have Ambassador Henri Verdier, who is from the French Ministry for Europe and Foreign Affairs. Thanks for joining us. And online we have David Magård, who’s with the EU Digital Identity Wallet Consortium, there he is, and the Open Wallet Forum at the ITU. Welcome, David. And last but not least, we have Anriette Esterhuysen, a stalwart of civil society participation in these processes, and also the chair, or rather the former chair, of the Multistakeholder Advisory Group. Are you still the chair? No, no. OK. So, let’s get started. Let’s begin with definitions and why you think this concept of digital public infrastructure has taken off and become such a buzzword. Let’s begin with Jyoti.

Jyoti Panday: Good morning, everyone. As Professor Mueller introduced me, I’m Jyoti Panday, I work with him at the Internet Governance Project, and welcome to Emerging Norms for Digital Public Infrastructure. The term digital public infrastructure, or DPI as it’s commonly referred to, is an approach or strategy for building large-scale networks, platforms and services that mediate key processes or functionality essential for operating in the digital economy. Whether it’s digital identity or paying for transactions, it encompasses the underlying design, the institutional frameworks and the resources that enable the development and use of these large-scale systems. DPIs, as we know, are transforming the global economy: they are impacting business practices and have altered relations between state, market and citizens. The emergence of DPIs like identity and authentication or interoperable payment systems has blurred the difference between the public and private sectors, traditional and new economies, tradable and non-tradable products, and goods and services. They have also created avenues for the development of norms and standards for cybersecurity, privacy, data protection and competition. Approaches to issues such as domestic and cross-border data flows, intellectual property rights, consumer and data protection, and digital security are significantly impacted by the emergence of DPIs, and internet governance policies are also being shaped and advanced through DPIs. So, as we know, this is a really important topic that everyone wants to weigh in on. But the fundamental issue is that even though forums like the G20 and bilateral negotiations indicate that the adoption of DPIs is at a tipping point and they are being advanced globally, consensus around what should be labelled as DPI, and what falls outside that label, is missing at the moment. So our aim at this workshop is to have people from the stakeholder groups that are working on and engaging with these processes shine some light and expand our understanding of how they are approaching building DPIs, and how they are going about defining the values, the infrastructure development and the design of these DPIs. The advancement of DPIs is happening even as legitimate concerns about their impact on trust, security and competition in the digital economy remain unexplored or under-addressed. There is a centralization of digital identity and online payments happening under the DPI label, and this can lead to policy problems like exclusion, fraud or even the compromise of digital security. The rapid development and deployment also has profound implications in terms of disrupting traditional sectors and businesses. Another risk raised by DPIs that are rooted in digital sovereignty, or the claim that states have a stake in running the internet and digital services and should have the maximum say in how they are taken forward, is that they could lead to fragmentation. So a sovereignty-based approach to building and developing DPIs could lead to fragmentation, and we want to explore this tension between the fragmentation and the cooperation that DPIs enable. Control of these institutional arrangements, the technical architecture of DPIs and their usage is creating discrete spaces of data and transactions, which can encourage and enable governments to pursue a sovereignty-based agenda.
Delays in reform and a rise in protectionism could hinder the adoption and expansion of DPIs as they are embraced elsewhere. Given the stakes and the economic, political and social impact of DPIs, it is important to think through their development and create avenues for oversight. The prevailing discourse focuses almost exclusively on legislative and regulatory measures; however, interventions such as audits and assessments could also play a vital role. As Professor Mueller indicated, we will delve into these other issues as the discussion goes on, but on the definition itself, there are global forums that are trying to create a platform and bring diverse stakeholders towards a common definition. Again, because of this sovereignty-based approach to developing DPIs, that consensus has not been easy to come by, and that's what we are hoping to delve into more here. I'm happy to jump in and talk more about it, but I want to hear from my fellow panelists. Thank you.

Milton Mueller: Thank you, Jyoti. So why don't we go next to David. The ITU has played a classical role in coordinating basic telecommunications infrastructure, though it has never really built infrastructure itself. Okay, you have to turn it on; somehow it got turned off. So yes, let's turn to David. Can you give us a brief overview, five minutes or so, of your perspective, particularly focusing on the definitional issues at this stage?

David Magarde: I can give it a try. Thank you, everyone, for inviting me to this panel. I'm sitting in a rather cold Sweden; I work for the Swedish Public Registration Office under the Ministry of Economic Affairs in Sweden, and I am also the coordinator of EWC, the EU Digital Identity Wallet Consortium. We pilot digital identity wallets for Europe. It's a collaborative effort: 86 organizations from the private and public sector, running over two years, 20 million euros, rather big. And I guess in some sense it is a digital public infrastructure, although this is not really my area of expertise, so I'm coming with a perspective from what I don't understand about public infrastructures, to see if that can add to the discussion. When it comes to digital public infrastructure, and I've been working within the Swedish government for about ten years on digitalization, and of course in the EU as well, as part of several expert groups, the Open Wallet Forum, and some OECD groups before that, the term has, in my view, only come in over the last three years. I wouldn't say it's a big discussion in the EU. I haven't really used it at all, to be honest, in the work that we've done, and it's not something that is really looked at, from my perspective, when it comes to the digital identity ecosystem in the EU. I think this is interesting for the discussion, because it seems we have different angles on it and different understandings, and that of course makes it difficult to come to a cohesive understanding of digital public infrastructure and what we should do with it. From my perspective, speaking about the EU and about Sweden, and this is my personal understanding, digital sovereignty is a core focus of the EU in our digital strategies. I wouldn't say that it necessarily hinders cooperation with the private sector or other countries, but there is an understanding that we need, to some degree, to have control of some of the key fundamental infrastructures when it comes to digitization, such as digital identity. Although we have a lot of good experience with public infrastructure...

Milton Mueller: We have a lot of ambient noise.

David Magarde: Should I stop?

Milton Mueller: You can go on, but we have a lot of ambient noise here.

David Magarde: Okay. And yes, to finish up, I think what I can mostly add is the question of interoperability between identity systems: interoperability at the technological infrastructure layers, of course, but also the semantics, the legal semantics and definitions and so on. So I'm curious about the interoperability questions within the digital public infrastructure framework, which, as far as I've seen, are not that prominent in the discussion papers I've browsed through over the last years.

Milton Mueller: Good. Thank you. I think it's interesting, then, that from David's perspective the term is not commonly used, and whatever concern he has is in fact with interoperability, which, as Jyoti flagged, may be an issue if you start taking a sovereigntist approach to DPI. So let's go on to Ambassador Verdier. I think you are more involved with this topic.

Henri Verdier: Yeah, I am. So first, let's say that I'm a veteran of the internet revolution. I started my first company in 1995 and then I became an ambassador. And I say this because it's always the same thing: the reality is vibrant, evolving, diverse, complex, and we look for words, but the reality doesn't obey our words. I say this because the DPI movement is one new word for various approaches. You can connect it to infrastructures, to public services, sometimes to digital commons, to platform strategies. Personally, I probably discovered this kind of idea 15 years ago, when I was writing a book on platform strategies and trying to explain the success and the strength of the big platforms like Google or Facebook or Apple itself. And suddenly I discovered that some people were speaking about government as a platform, and I was very interested by this approach. When I became the head of the French government's IT department, I tried to develop some building blocks for a government-as-a-platform strategy, and we developed France Connect and some important APIs. From a certain perspective, that was a kind of prototype of what we now call DPI. At that time I also discovered the Estonian X-Road, for example, which is another ancestor of the DPI approach. And then I discovered what was happening in India. Pramod is not with us today, but we can observe a massive impact there, because in less than three years they developed a digital ID for more than one billion people, and 25% of them didn't even have a legal existence before; then they had a legal existence plus an ID. Then they developed a very smart payment interface, which is just a set of APIs. They simply decided that the banks have to be able to receive payment orders in this format, and that they have a duty to execute payments ordered in this format. That's all, plus an independent body to regulate and make sure the banks are correctly implementing the system. And thanks to this approach, a good definition of a set of APIs, they managed to let the market conceive 600 payment systems. Of course, Alipay and Google Pay are the biggest, but there is a huge diversity of payment systems: you can pay with a QR code or with WhatsApp, depending on your service. I mention this to say that a lot of countries and organizations are trying to build this kind of small layer of something, a kind of platformization between the free internet and the market and society. And that's very important because, as I said, in the digital world platform strategies are usually very efficient. A platform strategy is a strategy that packages and distributes some resources for users. Most often the private platforms try to take part of the added value from the user: they take data, they try to capture you, or they ask you for revenue sharing. But it's efficient. A very simple and obvious platform strategy is the smartphone itself. If you are a service developer, you are very grateful to Apple or Google for offering you a connected computer with a camera, a connection, a lot of sensors and tools, plus a good SDK. So you are very grateful and you agree to share 30% of your revenues, because they allowed you to innovate and create and try to access the market. So it's efficient, and it can be dangerous, as was said, because of course the platform controls everything.
But to conclude this introduction, from my perspective the most important reason to speak about this here at the IGF is this: if we want, and we do want, to protect a free, open, decentralized, neutral and unified Internet; if we want to avoid the capture of this Internet by strong companies that are built on the Internet and that try to capture the customer; and if we want to remain free democracies, because we have fundamental rights (in Europe, for example, we are very attached to privacy, and we have to be able to say we want to protect privacy or we are not democracies anymore), then we have to impose some views on the companies that are built on the internet. So if you want to manage all of this, to protect the free, open and decentralized internet, to avoid capture by strong actors, to build in democratic feedback, then the design of this small layer of public services, as the interface between all those principles, is probably a very good and efficient approach. But of course, and I think it will be said, you can also have a public infrastructure that is dangerous: one that is too much owned or controlled by the government itself, that is not transparent enough, or that makes mistakes, with a security breach or I don't know what. So we have to design it carefully, in a multi-stakeholder way, with enough transparency and enough democratic feedback. But I don't see any other approach than this layer of public services if we don't want the internet to become a kind of new Far West. So that's my view to launch the conversation.
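To make the payment-interface idea above concrete (a thin public layer defined only by a common payment-order format that every participating bank must accept, with many private apps built on top), here is a purely illustrative sketch in Python. It is not the actual UPI/NPCI or PIX specification; the names (PaymentOrder, Bank, PaymentRail) and the settlement logic are hypothetical simplifications.

```python
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class PaymentOrder:
    """The shared message format that every participating bank must accept (hypothetical)."""
    payer_alias: str   # e.g. a phone number or handle, never a card number
    payee_alias: str
    amount: Decimal
    reference: str     # idempotency key: the same reference is never settled twice


class Bank:
    """A participating bank, obliged to execute orders expressed in the common format."""

    def __init__(self, name: str):
        self.name = name
        self.balances: dict[str, Decimal] = {}

    def open_account(self, alias: str, balance: Decimal) -> None:
        self.balances[alias] = balance

    def debit(self, alias: str, amount: Decimal) -> None:
        if self.balances[alias] < amount:
            raise ValueError("insufficient funds")
        self.balances[alias] -= amount

    def credit(self, alias: str, amount: Decimal) -> None:
        self.balances[alias] += amount


class PaymentRail:
    """The thin public layer: it routes orders between banks and enforces the shared rules."""

    def __init__(self):
        self.directory: dict[str, Bank] = {}  # alias -> bank that holds the account
        self.settled: set[str] = set()        # references already processed

    def register(self, bank: Bank, alias: str, opening_balance: Decimal) -> None:
        bank.open_account(alias, opening_balance)
        self.directory[alias] = bank

    def submit(self, order: PaymentOrder) -> bool:
        if order.reference in self.settled:
            return False  # duplicate order: ignored, so retries are safe
        payer_bank = self.directory[order.payer_alias]
        payee_bank = self.directory[order.payee_alias]
        payer_bank.debit(order.payer_alias, order.amount)
        payee_bank.credit(order.payee_alias, order.amount)
        self.settled.add(order.reference)
        return True


if __name__ == "__main__":
    # Any number of private "payment apps" can be thin front-ends over one shared rail.
    rail = PaymentRail()
    alpha, beta = Bank("Alpha Bank"), Bank("Beta Bank")
    rail.register(alpha, "alice@alpha", Decimal("100.00"))
    rail.register(beta, "bob@beta", Decimal("10.00"))

    order = PaymentOrder("alice@alpha", "bob@beta", Decimal("25.00"), "ref-001")
    print(rail.submit(order))             # True: settled
    print(rail.submit(order))             # False: duplicate reference, ignored
    print(alpha.balances, beta.balances)  # alice holds 75.00, bob holds 35.00
```

The design point is the division of labour: the shared rail defines only the message format, the alias directory and an idempotency rule, while the competing front-end apps, the "600 payment systems", remain entirely private.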

Milton Mueller: Very good. So you have introduced something that I think is the best definition of DPI that I've heard: a set of APIs serving as an intermediary, a platform for many different services, and particularly as an interface between government services and the broader public. Let's turn to Luca, who has, I think, some very strong perspectives on DPI based on his experience in Brazil.

Luca Belli: Not very strong. I would not say strong. But just to provide a little bit of context: I'm a professor at the FGV Law School, and besides directing the Center for Technology and Society at FGV, as Milton was eloquently announcing, I also direct a project called CyberBRICS that maps and compares the digital policies of the BRICS grouping. Over the past five years we have discussed a lot of issues, among them data governance, digital transformation and AI governance, and an overlapping issue is digital sovereignty. We have a book, which should already be available in open access on the Cambridge University Press website, on digital sovereignty in the BRICS, and some of the case studies we analyzed from India or Brazil are precisely about DPIs and how, in some cases, they can be considered an example of good digital sovereignty. Again, I also want to introduce a little caution, because DPI, like digital sovereignty, like pretty much anything we speak about, may be a label. To understand whether it is good or bad, we have to understand what is the content behind the label, right? On definitions: the only agreed international definition of DPIs was put forward by the G20 last year under the Indian presidency, and it has been taken up by the UN. For instance, if you look at the recent reports on DPI safeguards done with the UNDP, you will see that they quote the same G20 definition from last year's Indian presidency. And again, the Indians have been very successful, because DPIs are a key part of their digital transformation strategy, pursued for more than 10 years under the India Stack program. Let's say that the cherry on top of their strategy was to put this into the G20, so everyone now is speaking about DPIs, which is a very successful strategy; very few countries have advanced their strategy and agenda through the G20 this effectively, in my opinion. This definition considers DPIs as digital systems that should be secure and interoperable, and that are used to provide access to public or private services. They therefore deliver not only public services, although they are mainly used for public services. If you look at India, there is a very good example in the ONDC, the Open Network for Digital Commerce, which is a protocol for a great bazaar where any private service can be delivered; it's not only about public services. Usually we categorize DPIs as digital identities, online payments, and personal data consent managers. The first two especially could fit into public services, but they are not only public services; they can be used for private services. Another good example, also to stress that this does not only happen at the federal or national level, is in the city where I live, Rio de Janeiro, where there is a very good competitor to Uber called TaxiRio. It's a DPI made by the local municipality that allows you to take taxis without using Uber, entirely developed by the local administration. So it's a very good example of how DPIs can also be local, and of why we speak about digital sovereignty not only as something state-driven, but as something that can be driven by local communities, local municipalities, even local individuals.
And why do I argue that DPIs like PIX, our national digital payment infrastructure, are a good example of digital sovereignty? Because PIX allows, in this case, individuals and the government to understand how the technology works, to develop it, and to regulate it effectively, which is how we define digital sovereignty in the book, in its various nuances: it's all about understanding, developing, and regulating technology effectively. Before PIX was developed, the only way to process digital payments in Brazil was through Visa and MasterCard: two foreign companies that not only charge extortionate fees of between 3% and 5% across the Global South, but over the past 10 years have also become big data companies. Most of their revenue is not the 3% to 5% fee; it's the intelligence and profiling they build on the data of every single consumer who uses their cards. People don't understand it when they use their credit card: they are in fact not only paying with money, they are paying with data. And before PIX in Brazil or UPI in India, the only ones reaping those benefits were Visa and MasterCard. So this is a very good example, in my opinion, of how DPI can be leveraged for good digital sovereignty, because it broke the monopoly, actually the duopoly, of Visa and MasterCard. The antitrust and competition authorities could have tried any kind of remedy and they would never have broken this duopoly, never. This created an alternative that automatically put the 3% to 5% that consumers were paying back into their pockets: an enormous advancement in terms of competition and consumer benefit, and an enormous advancement in terms of informational self-determination, or data protection. People now understand that their data are collected, and who is collecting them, because the online banking intermediaries have terms of service that state precisely what they do. When you use a credit card, you don't even know that your data are collected. So that is an enormous advancement in terms of informational self-determination. And, to conclude, this has also provided a very good example of how multi-stakeholder cooperation is not only about speaking in nice fora; it can also be very outcome-oriented and effective. The Brazilian Central Bank, which is one of the few institutions in Brazil that works very well, leveraged a lot of public consultation with stakeholders to build PIX, is still working with all financial intermediaries to implement it, and created something called the PIX Forum, which is not a nice event but a process to collect continuous feedback from those who are implementing the technology, to understand how to improve it and what its pitfalls are. So I think there are a lot of very good examples here. Do I have one more minute? Yes, please. Meaningful connectivity: this also illustrates the value of meaningful connectivity. In India, UPI, the Unified Payments Interface from which PIX was inspired, was a success only because India in 2016 adopted net neutrality rules that prohibited zero rating. If you have meaningful connectivity, you can use DPIs. If, as in much of the Global South, you only access the internet through the Meta family of apps, primarily WhatsApp and in some cases Facebook and Instagram, you cannot access DPIs, because you would have to pay for the connectivity to reach them.
So the reason why India is experiencing a boom of innovation is that they prohibited zero rating in 2016. And paradoxically, the same logic is why PIX has been a success in Brazil. When PIX was about to enter into force in 2020, one month before, WhatsApp was introducing WhatsApp Payment, and the Brazilian Central Bank knew very well that if WhatsApp Payment had been introduced before PIX, during the pandemic, no one here would be celebrating PIX as a success story; everyone in Brazil would be using WhatsApp Payment. So they suspended the entry into force of WhatsApp Payment until PIX was also available to consumers. To be exact: PIX took off in Brazil only because the Brazilian Central Bank in 2020 understood that if they had allowed WhatsApp Payment to enter into force before PIX, PIX would have been absolutely useless. I think this is a very important cautionary tale for all governments willing to do DPIs: think systemically about everything, because if you spend a lot of money on the best possible DPI but at the end of the day everyone only accesses WhatsApp and Facebook, it's useless to put your investment in DPIs.

Milton Mueller: Okay, so now you understand why I said Luca had strong opinions about DPI. Let's go on to Anriette.

Henri Verdier: DPI is many things. Just before that, if I can add one word to make it even simpler: one day I was discussing with Pramod Varma, who was supposed to be here today, and he told me this very simple idea. He told me: you in Europe built your strength and prosperity, and maybe your sovereignty, thanks to public infrastructure. Roads, trains, bridges. And you became very prosperous. And then suddenly you stopped. And today, in the current economy, you need ID, payments, geolocation. Those are the infrastructures of the 21st century.

Anriette Esterhuysen: Thanks, Sam. And Milton, do I also have strong feelings? Actually, I do. I think that DPI is a really important opportunity for us to fill gaps that date back a long way. I see the ancestors of DPI, and I'm speaking very much from an African perspective; I'm in South Africa, but I work across the continent. Firstly, open government: one of those grand ideas and grand coalitions from about 20 years ago, which still exists but has lost its glamour, lost its attraction. And then the World Summit on the Information Society, where you had action lines on e-government, e-health, e-education, and an enabling environment for innovation. All of those elements were part of it, and of course connectivity infrastructure. And what we have as a result, 20 years later, is a landscape completely characterized by digital inequality, which means that any initiative, be it a health initiative, an education initiative or a government initiative, in most cases just increases that inequality. Because if there isn't connectivity across the board, if people don't have devices, if people in rural areas cannot afford the cost of mobile broadband, then any investment in any kind of digital public service, or in fact even in other, commercial services, tends to just increase the gap between those who have and can benefit and those who don't. I'm not surprised that David hasn't heard much about digital public infrastructure in Sweden, because when you have digital public infrastructure, as Sweden does, you don't need to talk about it. But we need to talk about it, and we need to talk about it very seriously. And I don't think, Milton, that it's a set of APIs; it's much, much more than that. I think the G20 definition, with its four elements including broadband infrastructure, is absolutely critical in Africa. I'm not sure how many people are aware that internet penetration in Africa is actually lower now than it was a few years ago: it was around 40 percent, and it's now in the upper 30s. Even access to electricity is decreasing in parts of Africa where dependency on hydro is affected by drought. And digital identity systems: even basic identity systems are a challenge in many African countries. And then, of course, there is the challenge of moving to digital identity when the data governance and data protection frameworks, and the public administration and rights protection frameworks, are also insufficient in many cases. Another challenge is finance and payments. How do most Africans have financial inclusion, or any semblance of it? It's through Safaricom, through mobile operators facilitating quite expensive financial services for the poor. Very few countries in Africa, outside of Nigeria and South Africa, for example, have banking that is actually accessible to the poor.

Milton Mueller: So let me just be clear here, Anriette: you are broadening the concept of infrastructure to everything, electricity, telecoms, banking, the whole schmear.

Anriette Esterhuysen: Well, the G20 definition of digital public infrastructure looks at data, finance, broadband infrastructure and identity. And for all of those aspects to be implemented and operated, and to become a platform for developing public services, you need not just the broadband, you also need the electricity infrastructure. Why I think it's important, and why DPI is such a good opportunity, is because, look, Luca talked a lot about how it can facilitate multi-stakeholder cooperation, and that's absolutely true. But I think even at a national level it can facilitate intergovernmental collaboration. It can facilitate initiatives such as laying out a broadband backbone, which is often done by the Ministry of Communications, with the Ministry of Finance trying to deal with financial inclusion and Home Affairs trying to deal with digital identity systems. A country like South Africa, for example, pays social grants; in fact, the only reason people in South Africa don't starve is because they get grants from the government. Administering that has been a massive challenge, because there is no infrastructure to do it easily. It has been done in collaboration with the private sector, but it is extremely complex.

Milton Mueller: I think I need to keep us a bit more focused here, because if you're talking about the development of telecommunications or broadband infrastructure, there is a completely different set of institutions and processes that facilitated the growth of that. For example, all of these countries that had state-owned telecom monopolies had abysmal levels of penetration before liberalization and competition were introduced in the 90s or 2000s, right? You have to admit that where public infrastructure, whether power or especially telecoms, relied entirely on the state as the developer, or on international grants, those countries are simply never going to catch up with what was happening in the parts of the world where there was commercial development.

Anriette Esterhuysen: What I'm saying is that DPI is an opportunity for countries where that gap exists to fill it. But with what capital?

Milton Mueller: With what investment? Where does the money come from?

Anriette Esterhuysen: That is why it's an important discussion, and that's why it's important that we discuss it in the context of the IGF and WSIS, because financing that investment, having public-private partnerships, and building in the kind of rights protections that are needed is something we can try to achieve through WSIS and through the South African chairing of the G20. And I know that Jyoti wants to speak about this. The fact that it's public infrastructure doesn't mean that it has to be developed and owned and controlled by the state. It means that it has to be developed, controlled and regulated in the public interest. I'm not suggesting a statist approach.

Milton Mueller: Let me go back to Jyoti, then; she's been jumping up.

Jyoti Panday: These are really interesting perspectives, but as the token representative of India, living through this great digital public infrastructure revolution, I feel I have to chime in. Going back a little into the history of how the DPI label came about: Anriette is very correct that the roots of this lie in the e-governance and digitization initiatives that have evolved with time, investment and stakeholder priorities. It is running along the same trajectory, but, for example, digital identity is rooted in the notion of national security, and therefore the state has much more say in running digital identity within the India Stack than it has in matters of UPI, which was developed by banks coming together to compete against MasterCard and Visa. So even within the label of India Stack, which later got reformulated as DPI when they wanted to export it to other countries, the idea of digital identity comes from one perspective, digital payments and wallets come from a different priority and perspective, and digital sovereignty seems to be the thread that ties these different efforts together and neatly brings them under a common label, which in turn facilitates adoption and so on. I want to throw in a couple of questions for everyone's consideration. I like what the Ambassador said about the gap that can exist between intention and experience. For example, when we talk about DPI and digital identity in India, one of the most common refrains I hear is: look at the scale, look at how many people have digital identity, and these were people, large parts of the population, who lacked any form of legal identity. Firstly, that claim is very, very contested; there were various forms of identity available to the population. But of course the state apparatus was backing this digital identity project, so one of the rationales to push it forward was the claim that people lacked identity and therefore needed a digital version of it, to upgrade the population and bring them into the modern world, into this revolution, right? I also want to point out that after the digital identity got built, it got integrated into welfare not because this was strategic from day one, but because the government and the players developing digital identity realized that the swiftest way to ensure adoption and achieve that scale would be to integrate it with welfare schemes, where the state has greater control and could apply its might behind the adoption of these services. So again: the intention, how it reached that scale, and is scale the only parameter of success? I think this is what Anriette is referring to when she says we can't only look at scale and reach. Let's talk about roads. All countries have roads, but in some countries the roads are excellent and talked about globally, and in other countries they're not that well developed; some villages and towns in India still have dirt tracks that are referred to as roads. Who is responsible for building roads? It can happen through public funding, through public-private cooperation, or the private sector can be contracted to build them.
But it's not just about building roads once: roads require maintenance, roads have to be upgraded as traffic piles up, and different aspects need to be considered as the use of the road evolves. In a lot of these discussions about DPI we get stuck at pointing to the success in India and not delving deeper. Fine, whatever the justification for why this label has become so popular, but what are the impacts of DPI on the ground? What is the reality for the people who are actually engaging and interacting with these public infrastructures, and how public are they? For example, the UPI API is owned by a consortium of Indian banks. They developed the API, and they actually didn't allow WhatsApp to launch its payments service, or at least delayed WhatsApp's adoption of the protocol that was developed for interoperable payments in India. So if the idea of digital public infrastructure is that it is an alternative to big tech and the monopolistic structures that are so prevalent in the digital ecosystem, are we replacing one monopoly with another kind of monopoly? I would really like to hear more about these themes from our panelists.

Milton Mueller: Okay.

Anriette Esterhuysen: I want to react to Jyoti, please, Milton. I agree 100% with you. This workshop is about norms, and I've looked at quite a lot of DPI norms, the UNDP safeguards guidelines or whatever they're called, and I think none of them mention what you've just talked about: how do you maintain these roads? How do you assess whether they're getting to the people who need them most, and whether those people are actually able to find buses they can ride on those roads to get where they need to go? I haven't seen that level of practical attention to the sustainability and the impact of DPI initiatives. And in fact, I think if we don't reframe how we talk about DPI, it's just going to become another opportunity for vendor-driven pseudo-public investment, taken advantage of by corporations that are set up to do exactly that, as has been the case with e-education, the WSIS action line.

Henri Verdier: I want to connect some dots. First, your earlier question: I think it's of the utmost importance to understand what must be mutualized. You said that sometimes public telecommunications did not reach out so much. The growing consensus within the community of DPI builders is that we must find what has to be done with this kind of public service approach and what must be left to the markets. So most DPI architects are looking for the minimum necessary infrastructure, if I may. For example, in India, UPI doesn't make the payment systems; there are more than 600 private payment systems, but built on a common infrastructure. That's much more efficient. Same for roads: the important thing is to have an interoperable road system. I should not have to change my car because I'm entering another private road; I need to be able to drive everywhere. If some of those roads are highways and I have to pay because a highway is very expensive, and others are small communal streets, it doesn't matter. The important thing is to have one unified system. And that's the beauty of the DPI movement: we are looking for what must be done as a common and clear infrastructure for the market. Right. I think this is well known, although not frequently acknowledged: we tend to have this polarity, more government or more market, but the fact of the matter is that markets function well when they have the right commons in which all the market players can interact. And maybe just one last thing. For most Europeans, when you say public services, you call lawyers too, because there is a long tradition of juridical definition of what a public service must be. A public service has to be neutral, with equal access; you have the duty to maintain it; you cannot change your mind and say, oh, sorry, I stopped this project. There are very clear and strong definitions of public service. It can be delivered through the private sector or not, it can be free or not, but there are specific duties. And a public service, at least in Europe, might be state-owned, but public servants are not the government, and they have, for example, the right to contest the authorities, because they obey the definition of their public service.

Milton Mueller: In some countries, that is true. Luca, are you trying to get in here?

Luca Belli: Yeah, first I want to draw everyone's attention to a rare moment of agreement between me and Milton, because I concur with him that it is better to keep the distinction between digital public infrastructure and classic infrastructure. When we speak about DPIs, we mean digital systems built on open standards that are interoperable and secure, used to provide services. We might include telecom infrastructure in this, but I think that's an overstretch; I would keep it to software rather than getting into hardware. Having said that, as I mentioned at the very beginning of my remarks, although there is this nice internationally agreed definition, there are very different implementations of the concept, and it's essential to study the details. Of course, in this panel we can only have a superficial discussion about it, but let me give some key examples from India and Brazil, on the very same type of digital public infrastructure for payments, to illustrate how radically different this can be. The Brazilian experiment with PIX has been an evolution of what was de facto copied from UPI in India, and maybe the Indians will not disclose that they in turn copied this from Russia, because Russia introduced its system, MIR, after the invasion of Crimea in 2014, when it was sanctioned by the US the day after, with the prohibition of Visa and MasterCard. From one day to the next, Russians could not pay for anything with their cards, so they had to come up with a new, domestic system: MIR, like the space station. But it was a very old system based on cards, a physical system. Now they have copied Brazil and India, and they also have a digital public infrastructure, a software one. But the origin was the Russians, who developed this for existential reasons: from one day to the next you have no online payments, so you have to come up with a solution. The Indians were already thinking about this and said, you know what, at the same time this disrupts the Visa and MasterCard duopoly, so let's do it. But the way the Indians did it, as Jyoti was mentioning, was to create a not-for-profit corporation, probably because they were skeptical about bureaucrats being able to do this. So the Reserve Bank of India, together with the main financial intermediaries, created the National Payments Corporation of India; it's like an ICANN for payments. And they built the digital public infrastructure, they created UPI. The reason why I argue that the Brazilian experiment actually went further, because in my opinion it is much more transparent and accountable, is that the Brazilian experiment was directed by the Brazilian Central Bank, a public institution. So if I file a request for access to information with the Brazilian Central Bank and ask them, tell me which data you have about me, tell me with whom you share them, tell me which cybersecurity standards you have adopted, they are obliged to reply to me. And if they don't reply, I sue them. If I file the same kind of request for access to information with the National Payments Corporation of India, they reply: I'm sorry, we are not a public institution, we have no duty to reply to you, and we can keep everything as opaque as we want, because this is not a public institution.
Of course, this could change in India if the National Payments Corporation of India became a public institution, and then it would be obliged to be more accountable. Not in data protection terms, though, because those who know India's Digital Personal Data Protection Act know very well that public institutions are exempted from respecting India's data protection law. So even if it were a public institution, it would not be bound by the data protection law, because the Indian data protection law has carved out an enormous exception for public institutions. That is why I argue that we really should look at the details of these things: calling something digital public infrastructure doesn't make it good or bad. We now have Bill Gates blogging about digital public infrastructure, because of course this is a fashionable concept that has been co-opted by Microsoft, as usually happens with all nice concepts. Last year at the IGF we were speaking about AI sovereignty; we released a book about it, and two months later NVIDIA and Oracle started to blog about how their products were excellent for AI sovereignty, which I think stretches the concept, because they essentially argue that you can be AI sovereign if you buy their tools. So again, we really have to look not only at the label but at what is behind the label, to form our opinion and understand whether it's good or bad.

Milton Mueller: Good, I need to get in here. I really appreciated your story about the competition issue related to Visa and MasterCard in Brazil. I would tell you that within the US this is recognized as a competition issue, but you need to understand how deeply embedded the dominance of those two payment networks is in our system of banking regulation. In other words, in some ways the regulatory system has created the conditions that lead to an oligopoly. One of the concerns I have with this notion of sovereignty when it comes to DPI is that you could very easily recreate that situation: if the DPI is not open and not standardized in a way that facilitates competition, local actors within a state can very easily recreate national oligopolies or national monopolies. And one of the big competitors to an emerging form of DPI, namely cryptocurrency, is of course the central banks. This is a very clear contradiction, a clash between a sovereignty-based payment and monetary system and a globalized one facilitated by the Internet. The other point I'd like to make is that when we talk about interoperability and commons versus market, it seems we're not fully recognizing the revolutionary impact of the Internet itself. What is the Internet? It's a set of non-proprietary protocols, right? Non-proprietary protocols that made all of this connectivity possible and essentially facilitated a digitally networked economy. It wasn't as if the government said, we're going to create a globally integrated economy and that's why we're funding this protocol. The protocols were created by computer scientists and researchers, and because they were non-proprietary, which is the good part about the fact that the government funded them, neither the US government nor any private actor controlled the use and development of the infrastructure built around the Internet. One of the reasons we can even talk about digital payments now coming down to our phones is this commons, this digital commons, created by the internet protocols.

Henri Verdier: That's real. So you're right: the Internet is based on open standards and we have to protect it. But we also have to recognize that the internet is being captured through ID, payments and things like this. So DPI could be an answer. But you're right too; maybe we should start thinking about a kind of ITU for DPIs, for interoperability and free exchange. I should be able to use my European digital wallet in India. That should be the goal: to be able to travel and exchange, coming from my DPI and negotiating with others.

Luca Belli: Just to complement your point: there is already an ITU for DPI, which is the ITU, because two years ago it released GovStack, a set of open protocols for DPIs. But what is the problem with the ITU? The ITU is an intergovernmental organization that sets norms, whereas the reasoning behind DPI is building the infrastructure. I honestly use this in class with students as an example of what Lessig called, 25 years ago, regulation by architecture: you are not regulating the market by imposing a sanction or defining a norm, you are regulating the market by creating an infrastructure, an architecture, that competes with what exists. And you're perfectly right; I mean, both of you are right. You are right because the protocols are probably the only part of the internet that is public. Public, exactly. But you are also right that, on that public infrastructure, leading U.S. big tech has concentrated dominance, and so the need for DPI now is to revert this concentration; it is evident, quite blatant. That is why I think some DPIs could be seen, once again with a word of caution because DPI is not always good or always bad, as very good examples of reclaiming digital sovereignty: reclaiming the capability to understand how the technology functions, to develop it and to regulate it effectively, which is something pretty much everyone has lost over the past two decades. Because the internet today is not regulated through our nice laws. I say this with enormous frustration as a lawyer. The way the internet is regulated is not through the law that I teach at school to students; I tell them that law is only one vector of regulation. As Lessig was saying 25 years ago, regulation is much more effective through infrastructure, what Lessig called architecture, which de facto is what Susan Strange called structural power 30 years ago, or through market incentives, through subsidies and taxation. Why does everyone use WhatsApp for all communications in the Global South? Because it's considered free. It's a marketing incentive. If WhatsApp cost $100 per month, no one would use it in the Global South. So the way we communicate is regulated by a subsidy that...

Milton Mueller: Which is how most of the infrastructure is actually created, I mean the telecom infrastructure, the power infrastructure, all of it. Let me get David in here and see if he has anything to say at this point; we've kind of left him in the dust.

David Magarde: No, no, thank you. Thank you for inviting me. I have two things that I would like to put on the table. One maybe interesting fact from Sweden: we're using BankID, which is an identity made by a cooperation of the big banks in Sweden. It's used by 99.7% of the Swedish adult population, that is, ages 18 to 67, so it is of course very successful when it comes to usage and adoption. But over the last three years we have started work on a government-issued identity, because we've seen that giving away all control of the digital identity also creates some issues. Those could be around security, but also inclusion, and also the development of the infrastructure itself, because then we on the public side are in the hands of BankID, which is sometimes good, because we have good cooperation, but sometimes it also stifles us a bit. So in my mind that says that every situation is different; it's not possible to have one framework that you can apply to every country. It will differ.

Milton Mueller: That's very interesting. I would like to hear more about this BankID offline.

David Magarde: The second part is that what we're doing in the EU now with the wallets is that we actually regulate open source and open standards for digital identity. And I think that with digital public infrastructures, if you want to call them that, the trigger is really when you make it cross-border, because then you need the interoperability, you need the guardrails and the railroads and all of these things working. It's really, really hard to do that even with standardization; we can see that with the different standards and so on. But what we're seeing now in the EU is the regulation of it, and it comes from an open-source kind of idea. So I think that is very promising for making these things you're speaking about, this kind of digital commons, with the possibility for everyone to have the same set of infrastructure and standards and so on. Yeah.

Milton Mueller: I think we're supposed to end at 10:45, so I'm going to ask all of you now, starting with Jyoti, to provide some wrap-up comments. And I want you to focus in particular on what action items you think the global community should take with respect to this issue. What can we do globally? And Anriette, I will not allow you to say that we should suddenly, magically come up with $7 trillion to build broadband infrastructure everywhere; I'm going to have to ask you where that money comes from. But in terms of real, feasible action items, what should we do next? Let's start with Jyoti.

Jyoti Panday: Before I come to the action items, briefly, one big takeaway for me from this conversation, and it's also a question, is: which digital services qualify for the label of digital public infrastructure? Is social media DPI? Is digital identity DPI? What are the values that actually translate into something being labeled as DPI? Does it have to be open standard? Does it have to be publicly funded for the "public" part of public infrastructure to hold? Another big point of confusion here is that the term infrastructure, or architecture, as Luca helpfully referred to it, carries a certain notion that there is common ground around these ideas. What is happening in the case of India Stack specifically, and maybe DPI more broadly, is that software and applications are being relabeled as infrastructure. And this is a very problematic pathway in my view, because as your digitization and digital economy goals keep shifting, you can't keep relabeling things that are agreed upon to suit your strategy and your convenience, and to propel you into the global conversation on regulating the internet and digital services. It is a very effective strategy, but it causes confusion, and it will lead to exactly the kind of fragmentation where stakeholders in various jurisdictions who are behind certain projects see the benefit of labeling their projects as DPI to attract more funding, more mileage, more attention. We need to be wary of that. In terms of action points, I think we really need to focus on unpacking these terms. Are all digital services DPI? What values constitute the "public" in the DPI label? When can applications and software become infrastructure, and what are the infrastructural dimensions? For instance, can you make a digital identity, the Indian digital identity Aadhaar, mandatory for me to engage with or access services on the internet? Right now in India, the use of Aadhaar has, by Supreme Court intervention, been restricted to services paid for from India's public funds, our taxpayers' money. But that decision is being completely flouted by stakeholders in both the private and public sectors in India, because they see the benefit of pushing digital identity adoption to achieve that scale. So I think drawing the guardrails and working towards a narrower, value-driven, precise definition is going to take this in a much more productive direction than the confusion that is currently informing our discussions everywhere. Thank you.

Milton Mueller: Thank you, Jyoti. Let's start with Anriette and then just go around the circle, ending with David.

Anriette Esterhuysen: Thanks, Milton. I agree very much with Jyoti. What I think we can do with digital public infrastructure is to look at it as infrastructure that can be used to enable digitally powered public services and benefits, and I do think a value-based and narrower definition is a good way of looking at it. And, Milton, maybe those millions and trillions of dollars can come from all that enhanced economic growth you're going to get from your protectionist economic policies in the U.S. over the years. But I am absolutely not going to take physical infrastructure off the table, because unless there is internet infrastructure, internet-enabled digital public infrastructure cannot exist. The other point I want to make is that DPI is not neutral. DPI, how it's rolled out and how people experience it, will be shaped by the context. An authoritarian regime is going to use DPI to enhance control. A revenue-collection-focused regime is going to use it to enhance collection. And a public-services, public-benefit-oriented regime is going to use it to create more inclusion. We cannot take that out of it: DPI is not neutral. It's going to rise and fall on how much inclusion there is, how much oversight there is, and how much public engagement there is. But I do think it's an opportunity to come back to looking at what we mean by internet- and digitally-enabled public services; to go back to all those very fragmented initiatives that emerged from open government and from WSIS, which are not connecting at all, and which in most cases are not actually creating more digital equality or digital inclusion. I think we can use DPI to bring that conversation back to: how do we collaborate? How do we integrate to achieve actual benefit for people?

Milton Mueller: Thank you, Anriette.

Henri Verdier: A few ideas. First, regarding the cost, since you mentioned the cost: I think Luca was right to say there is a difference between physical infrastructure and digital public infrastructure. The difference is that, to a first approximation, digital infrastructures are non-rival goods: you can have a lot of uses without the cost scaling. Of course, you have to pay for the servers. I was impressed in India because the cost of Aadhaar is basically $1 per person, while the cost of the French and most European digital IDs is $100 per person. So it's not so expensive. And the cost of UPI: UPI enabled more electronic transactions than the US plus China together, and it's basically a set of APIs plus a governance body. That's very inexpensive. So, what should be done now? First, like Luca, I'm not sure we have a very precise, unified definition for everything; we have various national approaches. But what we should probably look for is the minimum viable infrastructure that protects the free Internet, the free market and democracy. Just by thinking like this, you can find some answers. For example, you asked: should a social network be a DPI? I don't think so. But for curation algorithms, I think we should impose a market, a diversity of curation algorithms. We should not let 3 billion people access information through one algorithm designed by Mark Zuckerberg, because that's not democratic. So we should use the DPI movement to impose a variety of algorithms, to unbundle them, thanks to a kind of app store for algorithms that social networks would have to agree to accept. And we should start learning to think like this: what should be the minimum infrastructure to protect the free internet, to unleash a free market, and to protect democratic feedback? When you start thinking like this, you sometimes find, maybe not answers, but propositions of answers.

Milton Mueller: Thank you, Ambassador. We'll go to Luca next. And again, action items, if you can.
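The Ambassador's "app store for curation algorithms" proposal can be illustrated with a small, purely hypothetical sketch: the platform exposes one narrow ranking interface, and independently supplied algorithms can be swapped in by the user. The names used here (Post, Ranker, FeedService, chronological, most_discussed) are assumptions for illustration, not any existing platform's API.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    replies: int = 0


# A curation algorithm is just a function: it takes the candidate posts
# and returns them in the order the user will see them.
Ranker = Callable[[List[Post]], List[Post]]


def chronological(posts: List[Post]) -> List[Post]:
    """Newest first: the simplest possible curation algorithm."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def most_discussed(posts: List[Post]) -> List[Post]:
    """Rank by reply count: a different, independently supplied algorithm."""
    return sorted(posts, key=lambda p: p.replies, reverse=True)


@dataclass
class FeedService:
    """The platform side: it stores posts but lets the user choose the ranker."""
    posts: List[Post] = field(default_factory=list)
    ranker: Ranker = chronological

    def timeline(self) -> List[str]:
        return [f"{p.author}: {p.text}" for p in self.ranker(self.posts)]


if __name__ == "__main__":
    feed = FeedService(posts=[
        Post("ana", "old but busy thread", timestamp=1.0, replies=12),
        Post("ben", "brand new post", timestamp=3.0, replies=0),
        Post("carla", "middling post", timestamp=2.0, replies=4),
    ])
    print(feed.timeline())          # chronological order by default
    feed.ranker = most_discussed    # the user swaps in a different algorithm
    print(feed.timeline())          # now ranked by amount of discussion
```

The design choice mirrors the payment sketch earlier in the session: only the narrow interface is the public, contestable part, while the competing algorithms themselves stay private and diverse.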

Luca Belli: So, drawing conclusions from what we have discussed, and from what I was trying to convey, there are multiple layers to consider in this quite complex situation. First, we should not assume that DPI is good or bad, but analyze how it is proposed and how it emerges as a discussion. I am very concerned, for instance, that the IMF is now imposing digital identity as a counterpart for loans, because that is an imposition: it is not something emerging as a bottom-up process from a country that wants to develop it and knows how to develop it, but something imposed, likely adopting a type of technology that does not necessarily fit the specific reality. You were speaking about Visa and MasterCard being part of the U.S. ecosystem, let's say, which may work very well in the U.S.; I'm not an expert on the U.S., it is not a country I have studied much, but every country is very specific. So, first, there is no silver bullet. That is probably the reason why the ITU GovStack doesn't work very well: we cannot impose, or hope, that everyone will magically adopt a fantastic solution. It's much better to bet on the fact that those who are ready and want to do this may be able to develop it. And here is my second point: the institutions that drive this process are very important. The Brazilian example, I want to stress this, is a very special example, driven by an institution that, although it is in an emerging, developing economy, is very well resourced, has very good command and control capabilities, and understands systemically how to do this. The PIX system is a success because they understood all the layers of complexity, starting from access and meaningful connectivity, or the lack of meaningful connectivity, in Brazil. The reason they postponed WhatsApp Payment is that they are well aware that Brazil is not meaningfully connected: 22% of the population has meaningful connectivity, and 78% is connected only through WhatsApp and Facebook. That is a very important point, because if we invest a lot of money in a digital public infrastructure and everyone only uses Facebook and WhatsApp, we are wasting our money. We have to think systemically. And last but not least, I think this also gives a lot of hope to those of us who have been going on about multi-stakeholderism for 15 or 20 years: it's a very good example of how multi-stakeholderism can go beyond nice chats in exotic places and be very outcome-oriented and effective when it is well orchestrated.

Milton Mueller: So more bottom-up where it's ready, not top-down.

Luca Belli: Understanding the local realities and engaging the stakeholders in the most effective way.

Milton Mueller: Good, but recognizing that when things evolve in that way, they could end up fragmented, right?

Luca Belli: Yes, they could be fragmented, but we cannot think that to fight fragmentation we should impose concentration and the accumulation of wealth by only three or four enormous tech companies that are not even taxed as they should be. If you want to find the money to pay for all this, start by taxing Google, Facebook and Amazon, which have reaped enormous benefits since the pandemic but have never been taxed properly. There are reports that Brazil, India and Indonesia alone lose $4 billion per year to tax evasion. So if you want to find the money, let me tell you, my friend, you can find it.

Milton Mueller: Four billion might finance broadband in, what, two cities? But anyway. But, David...

Luca Belli: I'm not speaking about broadband, I'm speaking about software, which is much cheaper, as Henri was saying.

David Magarde: Yeah, sure. I think a purely bottom-up approach would favour the key players that already control most of the environment; they would effectively control even more of it. So I think regulation needs to be there, and some institutions. I know that the ITU libraries and so on may not be as fully used as some hoped, but I really do like the idea of having libraries with open standards, source code and so on that can be used, because that also drives innovation and ensures that civil society, but also private companies, can look into what is happening on the public-sector side, so that governments do not take control of all of the relevant infrastructure. Developing an inclusive digital infrastructure needs to be cooperative between the public and private sectors, but it also needs to be transparent, so that everyone with the skills can go in and look at what is actually happening. I was also surprised that India has excluded its public sector from its data protection rules, because that is one thing I think is working quite well in the EU when it comes to ensuring transparency and openness about data, which is of course what matters most to people: their data, or data about them. So I would say regulation, plus openness, plus providing tech stacks for everyone to use. Then some players, areas or nations will go before the others and, hopefully, do a good job, so that the others can jump on that train, and we can see interoperable basic infrastructures that people can build on in their own context. That, I hope, would enable people to travel more freely, both digitally and physically, using a digital identity they control, which is the key enabler for most of these things.

Milton Mueller: Okay, so I just realized that I've totally excluded the audience from the discussion. It's because so much vigorous stuff was going on here that it was hard to get in, and I'm sure we'd be talking for another hour if I didn't rein them all in. So I would encourage you, if you're here, to approach the speakers, take off your headphones and talk to people about some of their ideas. I think we have about five minutes. Two minutes. We have eight minutes? Seven. Seven minutes. Let's unleash the audience. All right. Here, we've got somebody. You'll have to give him a microphone.

Kashfi Inua: Thank you very much. My name is Kashfi Inua, from Nigeria. Quite an insightful and interesting conversation. But my question is, why is it only in the Global South that we talk about DPI? In the Global North, nobody talks about DPI. And looking at how they evolved in building their infrastructure, I am slightly lost as to what is happening in their enterprises: they have a backbone, and through that backbone they are connecting everyone. So sometimes I just get confused that it is always, let's talk about DPI, Global South. Thank you.

AUDIENCE: So, first, I don't think there is such a thing as a Global South; there is huge diversity in the south and in the north. But I have worked on this question for a long time, so to your question: the truth is, as was said, we have much older and more complex systems that work a bit like a DPI. For example, in France, you can buy a baguette with your credit card without paying any fee to anyone, so you don't really need UPI, because you have this system. But our system is very old, with a strong legacy, and very expensive, so the need to change is not the same. As I said, you have X-Road in Estonia, you have France Connect in France; you have pieces of DPI, but they were designed before the DPI movement.

Milton Mueller: Luca?

Luca Belli: Just a quick comment on this, because I think the Global South has realized that there is a problem: we have been digitally colonized. The countries of the Global South, or the global majority, are extraordinarily diverse, but the only thing they have in common is that they have been colonies. So they understand that they are being colonized again. The Global North is used to being the colonizer and is not used to being colonized; they are only realizing it now. Let me tell you, I was at the European Parliament in September to present research on digital sovereignty in the BRICS, because now the Global North is studying the Global South. There is a lot to be learned from India, Brazil, China and South Africa in terms of how to react to this kind of digital colonialism, which is of course ongoing, and from which very few countries, I would say only one, are benefiting, while the others are not benefiting at all. So the others have a problem and need to find a solution.

Milton Mueller: I would want to challenge-

Annelies: Simply, you are right. We speak about it in the Global South because we don't have infrastructure. You don't speak about it as much in the Global North because you have infrastructure. You know what it feels like to go to the Netherlands and hear people complain that their trains are late, when in South Africa getting a bus is impossible. It's a vast difference, and the sad thing is that these conversations repeat themselves. During the structural adjustment era, when we were trying to invest in public education and public health, we were told by the international financial institutions and the Global North: don't, let the market do it. When we talked about building broadband capacity, we were told by Milton and other people: let the market do it. The market fails to do that. I'm not saying the market alone is to blame; there is a whole set of complex factors. But that is why we keep having these conversations. I think the challenge is: are our governments having them in the right way, in the most effective way? Are financial institutions responding in the most effective way? It's very clear why we have these conversations. We just need to use them better, and be more action-oriented and more context-oriented in how we actually try to redress the gaps in infrastructure we are talking about.

Milton Mueller: I don't think any of you actually answered his question, well, partly you did. You're saying, yes, we don't have infrastructure. But none of you has dealt with the question of why you don't. Now, part of it is the legacy of colonialism, and by that I mean real colonialism, not this fake thing called digital colonialism, in which you're simply talking about the advancement of technological capabilities and their extension from the leading countries. But a lot of it is also the fact that your institutions have not allowed for the market development that has been possible in other countries, right? You still have state-owned telecom monopolies in many African countries. Voice over IP was illegal in many of these countries for a very long time. All the forms of competition and innovation that were enabled by digital technology have frequently been stifled by extremely authoritarian or protectionist governments that will not allow the infrastructure to develop.

AUDIENCE: I'd just like to jump in here. I think Jyoti has something to say, and I was also going to direct my next question to you; it builds off of this, if you'd like to take it. I think what he was trying to say, and what you were trying to say about Global South differences, is something we have to acknowledge and that we publish in our own research: these are places called leapfrog regions, and they exist in the Global North as well. You can look at Flint, Michigan; you can look at different parts of Atlanta and New York. There are regions where, when COVID hit, we saw that there was no digital public infrastructure in the North either. So we need to acknowledge that digital public infrastructure cuts across both sides. And so my question is this: when you talk about authoritarian governments and monopolistic telecom companies, what we feel in the Global South is that those companies and that structure are not shaped by our governments, they are shaped by outside forces. So what can we do as parties in the Global South? I come from Northern Ethiopia, he comes from Nigeria, I'm not sure where you come from, but what I'm curious to hear, and Jyoti, I'd love for you to start, is how the youth from these places can decide to take control of our own infrastructure, and how we leapfrog past what we see in front of us, because it's highly, highly unfair.

Jyoti Panday: Yeah, thank you for these amazing questions from the audience. I have a somewhat skeptical view of this. On the first gentleman's question, as to why we hear this label more in the Global South and not so much in the Global North: it's because the label was actually developed in the Global South. These advanced nations, with their advanced technological capabilities, are wary of adopting terms they were not involved in from the outset; they are used to dictating the terms of development. So partly it comes from geopolitical competition around tech policy, but people also want to see how this label evolves and what its impact is. And when they see it is a great success, and this is Luca's point about why the Gates Foundation or the IMF are now suddenly using these labels, once they pick up, once they become popular and the benefits are visible, they will start using them more. In terms of how we leapfrog, I want to give an example from India on broadband, actually.

Milton Mueller: Jyoti, we’re about to get kicked out of the room. It’s 11 o’clock.

Jyoti Panday: So Reliance brought data rates down dramatically in India, and everybody clapped and said this was wonderful. But now, ten years down the line, we see that investment in actual internet infrastructure has dwindled, because there is effectively a monopoly; there are only two telecom operators in India. So we have to take these decisions carefully; there are always trade-offs involved. I'm happy to discuss this offline, I don't want to hold anyone in the room. But thank you again for the questions and for all your inputs, and thank you to all the panellists and speakers for making the time. Thank you again very much for being a part of this.

Milton Mueller: Thank you.

Henri Verdier

Speech speed: 139 words per minute
Speech length: 1029 words
Speech time: 441 seconds

DPI as minimum necessary infrastructure to protect free internet, market and democracy

Explanation

Henri Verdier argues that DPI should be the minimum viable infrastructure needed to protect the free internet, free market, and democracy. This approach aims to find the essential elements required for these goals without overreaching.

Evidence

He suggests that curation algorithms should be diverse and not controlled by a single entity like Mark Zuckerberg, as this is not democratic.

Major Discussion Point

Definition and Scope of Digital Public Infrastructure (DPI)

Agreed with

Jyoti Panday

David Magarde

Agreed on

Importance of institutional frameworks and governance

Differed with

Milton Mueller

Luca Belli

Differed on

Role of government and market in DPI development

Luca Belli

Speech speed: 168 words per minute
Speech length: 3591 words
Speech time: 1276 seconds

DPI as digital systems built on open standards that are interoperable and secure to provide services

Explanation

Luca Belli defines DPI as digital systems based on open standards that are interoperable and secure for providing services. He emphasizes the importance of software rather than hardware in this definition.

Evidence

He contrasts this with traditional infrastructure like telecommunications, suggesting DPI should focus more on software aspects.

Major Discussion Point

Definition and Scope of Digital Public Infrastructure (DPI)

Agreed with

Jyoti Panday

Annelies

Agreed on

Need for a clearer definition of DPI

Differed with

Henri Verdier

Jyoti Panday

David Magarde

Differed on

Definition and scope of Digital Public Infrastructure (DPI)

DPI can break monopolies and increase competition, as with payments in Brazil

Explanation

Belli argues that DPI can disrupt monopolies and foster competition. He uses the example of Brazil’s payment system PIX to illustrate how DPI can challenge established financial monopolies.

Evidence

He describes how PIX broke the duopoly of Visa and MasterCard in Brazil, reducing transaction fees and improving data protection for consumers.

Major Discussion Point

Benefits and Risks of DPI

Bottom-up approach needed, with stakeholder engagement based on local realities

Explanation

Belli advocates for a bottom-up approach in developing DPI, emphasizing the importance of understanding local contexts and engaging stakeholders effectively. He argues against imposing solutions from the top down.

Evidence

He cites the Brazilian example of PIX, where the central bank understood the local reality of limited meaningful connectivity and adjusted their strategy accordingly.

Major Discussion Point

Implementation and Governance of DPI

Differed with

Milton Mueller

Henri Verdier

Differed on

Role of government and market in DPI development

Global North studying Global South approaches to digital sovereignty

Explanation

Belli points out that the Global North is now studying approaches to digital sovereignty developed in the Global South. This represents a shift in the traditional flow of knowledge and policy ideas.

Evidence

He mentions being invited to the European Parliament to present research on digital sovereignty in the BRICS countries.

Major Discussion Point

Global Perspectives on DPI

Jyoti Panday

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

DPI as approach for building large-scale networks, platforms and services essential for digital economy

Explanation

Jyoti Panday defines DPI as an approach to building large-scale networks, platforms, and services that are essential for operating in the digital economy. This includes elements like digital identity and payment systems.

Evidence

She mentions examples such as identity and authentication systems and interoperable payment systems as part of DPI.

Major Discussion Point

Definition and Scope of Digital Public Infrastructure (DPI)

Agreed with

Luca Belli

Annelies

Agreed on

Need for a clearer definition of DPI

Differed with

Henri Verdier

Luca Belli

David Magarde

Differed on

Definition and scope of Digital Public Infrastructure (DPI)

DPI can lead to centralization and compromise digital security

Explanation

Panday warns that DPI can potentially lead to centralization of digital identity and online payments, which may compromise digital security. She also notes that rapid development of DPI can disrupt traditional sectors and businesses.

Evidence

She mentions risks such as exclusion, fraud, and compromise of digital security as potential consequences of centralized DPI.

Major Discussion Point

Benefits and Risks of DPI

Importance of institutional frameworks and oversight mechanisms

Explanation

Panday emphasizes the need for proper institutional frameworks and oversight mechanisms in the development and implementation of DPI. She suggests that interventions such as audits and assessments could play a vital role.

Evidence

She notes that the current discourse focuses mainly on legislative and regulatory measures, but other forms of oversight are also important.

Major Discussion Point

Implementation and Governance of DPI

Agreed with

David Magarde

Henri Verdier

Agreed on

Importance of institutional frameworks and governance

David Magarde

Speech speed: 139 words per minute
Speech length: 1210 words
Speech time: 521 seconds

DPI not commonly used term in EU, focus instead on interoperability

Explanation

David Magarde notes that the term DPI is not commonly used in the EU context. Instead, the focus is more on interoperability between identity systems and technological infrastructure layers.

Evidence

He mentions his work with the EU Digital Identity Wallet Consortium, which focuses on interoperability rather than explicitly using the term DPI.

Major Discussion Point

Definition and Scope of Digital Public Infrastructure (DPI)

Differed with

Henri Verdier

Luca Belli

Jyoti Panday

Differed on

Definition and scope of Digital Public Infrastructure (DPI)

Need for regulation and openness in DPI development

Explanation

Magarde argues for the need for regulation and openness in the development of DPI. He suggests that while bottom-up approaches are important, some level of regulation is necessary to prevent control by existing key players.

Evidence

He advocates for libraries with open standards and source code to drive innovation and ensure transparency in the public sector’s development of infrastructure.

Major Discussion Point

Implementation and Governance of DPI

Agreed with

Jyoti Panday

Henri Verdier

Agreed on

Importance of institutional frameworks and governance

Annelies

Speech speed: 154 words per minute
Speech length: 1233 words
Speech time: 479 seconds

Need for narrower, value-driven definition of DPI

Explanation

Annelies argues for a more focused, value-driven definition of DPI. She suggests looking at DPI as infrastructure that enables digitally powered public services and benefits.

Major Discussion Point

Definition and Scope of Digital Public Infrastructure (DPI)

Agreed with

Jyoti Panday

Luca Belli

Agreed on

Need for a clearer definition of DPI

DPI is not neutral and can be used for control or public benefit depending on context

Explanation

Annelies emphasizes that DPI is not neutral and its implementation and effects depend on the context. She argues that the way DPI is used can vary based on the regime’s goals and values.

Evidence

She provides examples of how different types of regimes (authoritarian, revenue-focused, public service-oriented) might use DPI for different purposes.

Major Discussion Point

Benefits and Risks of DPI

DPI more discussed in Global South due to lack of existing infrastructure

Explanation

Annelies explains that DPI is more frequently discussed in the Global South because of the lack of existing infrastructure. She contrasts this with the Global North, where infrastructure is already in place.

Evidence

She uses the analogy of complaints about late trains in the Netherlands versus the impossibility of getting a bus in South Africa to illustrate the infrastructure gap.

Major Discussion Point

Global Perspectives on DPI

Milton Mueller

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Risk of DPI leading to fragmentation if pursued with sovereignty-based approach

Explanation

Milton Mueller raises concerns about the potential for DPI to lead to fragmentation if pursued with a sovereignty-based approach. He suggests that this could recreate national oligopolies or monopolies if not implemented carefully.

Evidence

He uses the example of cryptocurrency as a potential competitor to sovereignty-based payment systems.

Major Discussion Point

Benefits and Risks of DPI

Differed with

Luca Belli

Henri Verdier

Differed on

Role of government and market in DPI development

Question of where funding will come from for DPI in developing countries

Explanation

Mueller raises the question of how DPI will be funded in developing countries. He challenges the idea that vast sums of money can suddenly appear for infrastructure development.

Major Discussion Point

Implementation and Governance of DPI

Institutional barriers to infrastructure development in some countries

Explanation

Mueller argues that institutional barriers in some countries have hindered market development and infrastructure growth. He suggests that protectionist policies and authoritarian governments have often stifled innovation and competition.

Evidence

He mentions examples such as telecom monopolies and restrictions on Voice over IP in some African countries.

Major Discussion Point

Global Perspectives on DPI

Unknown speaker

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Need to consider “leapfrog regions” lacking infrastructure in Global North as well

Explanation

An audience member points out that there are ‘leapfrog regions’ lacking digital infrastructure in the Global North as well. This challenges the simple North-South divide in discussions about DPI.

Evidence

The speaker mentions examples like Flint, Michigan and parts of Atlanta and New York where lack of digital infrastructure became apparent during the COVID-19 pandemic.

Major Discussion Point

Global Perspectives on DPI

Agreements

Agreement Points

Need for a clearer definition of DPI

Jyoti Panday

Luca Belli

Annelies

DPI as approach for building large-scale networks, platforms and services essential for digital economy

DPI as digital systems built on open standards that are interoperable and secure to provide services

Need for narrower, value-driven definition of DPI

The speakers agree that there is a need for a more precise and value-driven definition of DPI, focusing on its role in building essential digital infrastructure and services.

Importance of institutional frameworks and governance

Jyoti Panday

David Magarde

Henri Verdier

Importance of institutional frameworks and oversight mechanisms

Need for regulation and openness in DPI development

DPI as minimum necessary infrastructure to protect free internet, market and democracy

The speakers emphasize the need for proper institutional frameworks, oversight mechanisms, and regulation in the development and implementation of DPI to ensure openness and protect democratic values.

Similar Viewpoints

Both speakers view DPI as a tool to promote competition and protect democratic values in the digital economy.

Luca Belli

Henri Verdier

DPI can break monopolies and increase competition, as with payments in Brazil

DPI as minimum necessary infrastructure to protect free internet, market and democracy

Both speakers highlight the potential risks of DPI, emphasizing that its implementation can lead to centralization or be used for control depending on the context.

Jyoti Panday

Annelies

DPI can lead to centralization and compromise digital security

DPI is not neutral and can be used for control or public benefit depending on context

Unexpected Consensus

Global South leading in DPI development and discourse

Luca Belli

Jyoti Panday

Annelies

Global North studying Global South approaches to digital sovereignty

DPI more discussed in Global South due to lack of existing infrastructure

There is an unexpected consensus that the Global South is leading in DPI development and discourse, with the Global North now studying and learning from these approaches. This represents a shift in the traditional flow of knowledge and policy ideas.

Overall Assessment

Summary

The main areas of agreement include the need for a clearer definition of DPI, the importance of institutional frameworks and governance, and recognition of both the potential benefits and risks of DPI implementation. There is also consensus on the growing importance of Global South approaches to DPI.

Consensus level

Moderate consensus with some diverging views. While there is agreement on the importance of DPI and the need for proper governance, there are differing perspectives on its implementation, funding, and potential impacts. This suggests that further dialogue and research are needed to develop a more unified approach to DPI development and implementation globally.

Differences

Different Viewpoints

Definition and scope of Digital Public Infrastructure (DPI)

Henri Verdier

Luca Belli

Jyoti Panday

David Magarde

DPI as minimum necessary infrastructure to protect free internet, market and democracy

DPI as digital systems built on open standards that are interoperable and secure to provide services

DPI as approach for building large-scale networks, platforms and services essential for digital economy

DPI not commonly used term in EU, focus instead on interoperability

Speakers had different perspectives on what constitutes DPI, ranging from a minimal infrastructure approach to a broader definition encompassing various digital systems and services.

Role of government and market in DPI development

Milton Mueller

Luca Belli

Henri Verdier

Risk of DPI leading to fragmentation if pursued with sovereignty-based approach

Bottom-up approach needed, with stakeholder engagement based on local realities

DPI as minimum necessary infrastructure to protect free internet, market and democracy

Speakers disagreed on the extent of government involvement in DPI development, with some advocating for a more market-driven approach and others emphasizing the need for government intervention.

Unexpected Differences

Global North-South divide in DPI discussions

Luca Belli

Milton Mueller

Unknown speaker

Global North studying Global South approaches to digital sovereignty

Institutional barriers to infrastructure development in some countries

Need to consider “leapfrog regions” lacking infrastructure in Global North as well

While most speakers focused on the Global South’s need for DPI, an unexpected perspective emerged highlighting similar infrastructure gaps in parts of the Global North, challenging the traditional North-South divide in digital development discussions.

Overall Assessment

Summary

The main areas of disagreement centered around the definition and scope of DPI, the role of government versus market forces in its development, and the global perspective on DPI needs.

Difference level

The level of disagreement was moderate to high, with significant implications for how DPI might be conceptualized, implemented, and governed globally. These differences suggest that achieving a unified approach to DPI development and implementation may be challenging, potentially leading to varied strategies across different regions or countries.

Partial Agreements

These speakers agreed on the need for stakeholder engagement and oversight in DPI development, but differed on the balance between bottom-up approaches and regulation.

Luca Belli

David Magarde

Jyoti Panday

Bottom-up approach needed, with stakeholder engagement based on local realities

Need for regulation and openness in DPI development

Importance of institutional frameworks and oversight mechanisms

Takeaways

Key Takeaways

There is no clear consensus on the definition and scope of Digital Public Infrastructure (DPI)

DPI can potentially break monopolies and increase competition, but also carries risks of centralization and compromising digital security

The implementation and governance of DPI should involve multi-stakeholder processes and consider local contexts

There are differing perspectives on DPI between the Global North and South, partly due to existing infrastructure gaps

Funding and institutional frameworks remain key challenges for DPI development in many countries

Resolutions and Action Items

Work towards a narrower, value-driven definition of DPI

Analyze how DPI is proposed and implemented rather than labeling it as inherently good or bad

Pursue more bottom-up approaches to DPI development where countries are ready

Develop open standards and libraries for DPI that can be used by different stakeholders

Study successful DPI implementations in countries like India and Brazil to learn lessons

Unresolved Issues

How to define which digital services qualify as DPI

How to balance sovereignty-based approaches with the need for global interoperability

Where funding will come from for DPI development in resource-constrained countries

How to address institutional barriers to infrastructure development in some countries

How to ensure DPI promotes inclusion and public benefit rather than control or surveillance

Suggested Compromises

Focus on developing the ‘minimum viable infrastructure’ that protects free internet, markets and democracy

Combine regulation with openness and transparency in DPI development

Balance country-specific DPI approaches with efforts towards cross-border interoperability

Consider both software-based DPI and physical infrastructure needs in developing countries

Thought Provoking Comments

DPI movement is one new word for various approaches. And you can connect it to infrastructures, to public service, to sometimes digital commons, to platform strategies.

speaker

Henri Verdier

reason

This comment highlights the broad and evolving nature of DPI, showing it’s not a single defined concept but encompasses multiple approaches and ideas.

impact

It set the stage for a nuanced discussion about what exactly constitutes DPI and how it manifests in different contexts.

The Brazilian experiment with PIX has been an evolution of what has been copied de facto from the UPI in India, and actually maybe the Indians will not disclose that they have copied this from Russia, because Russia introduced its system MIR after the invasion of Crimea in 2014, when it was sanctioned immediately the day after by the US with the prohibition of Visa and MasterCard.

speaker

Luca Belli

reason

This comment provides important historical context and shows how DPI initiatives have evolved and spread globally, often in response to geopolitical pressures.

impact

It broadened the discussion beyond just technical aspects to consider geopolitical and historical factors shaping DPI development.

DPI is not neutral. DPI and how it’s rolled out, how people experience it, will be shaped by the context. An authoritarian regime is going to use DPI to enhance control. And a revenue-focused, revenue-connection-focused regime is going to use it to enhance collection. And a public services, public benefit-oriented regime is going to use it to create more inclusion.

speaker

Annelies

reason

This insight highlights that the impacts of DPI depend heavily on the motivations and governance structures implementing it.

impact

It shifted the conversation to consider the critical importance of governance frameworks and oversight in DPI implementation.

We have now Bill Gates blogging about digital public infrastructure because of course this is a fashionable concept that’s been co-opted by Microsoft as usually happens with all nice concepts.

speaker

Luca Belli

reason

This comment raises important questions about the co-opting of DPI concepts by large tech companies.

impact

It prompted discussion about potential conflicts between public interest goals of DPI and commercial interests.

One of the concerns I have with this notion of sovereignty when it comes to DPI is that you could very easily recreate that situation in which local actors within a state essentially, if the DPI is not open and not standardized in a way that is facilitating competition, it can very easily lead to recreation of national oligopolies or national monopolies.

speaker

Milton Mueller

reason

This comment highlights a key tension between national sovereignty and open, competitive systems in DPI development.

impact

It sparked debate about how to balance national control with openness and competition in DPI.

Overall Assessment

These key comments shaped the discussion by highlighting the complex, multifaceted nature of DPI. They moved the conversation beyond technical definitions to consider historical context, geopolitical factors, governance implications, and potential pitfalls. The discussion evolved to grapple with tensions between national sovereignty and global interoperability, public interest and commercial motivations, and the need for both government involvement and market competition in DPI development. This nuanced exploration revealed DPI as a concept with significant implications for digital governance, economic development, and global power dynamics in the digital age.

Follow-up Questions

Which digital services qualify for the label of digital public infrastructure?

speaker

Jyoti Panday

explanation

This is important to clarify the scope and definition of DPI, as there is currently a lack of consensus on what should be included under this label.

What values should translate into something being labeled as DPI?

speaker

Jyoti Panday

explanation

Understanding the core values and principles that define DPI is crucial for developing consistent frameworks and policies.

How can we ensure interoperability between different DPI systems across countries?

speaker

Henri Verdier

explanation

Interoperability is key for enabling cross-border use of DPI and preventing fragmentation.

How can we balance the need for digital sovereignty with international cooperation in developing DPI?

speaker

Luca Belli

explanation

This tension needs to be addressed to prevent fragmentation while allowing countries to develop context-appropriate solutions.

How can we ensure transparency and accountability in DPI systems?

speaker

Luca Belli

explanation

Transparency is crucial for building trust and enabling proper oversight of DPI systems.

What role should regulation play in the development and governance of DPI?

speaker

David Magarde

explanation

Understanding the appropriate balance between regulation and market-driven approaches is important for effective DPI development.

How can we address the digital divide and ensure meaningful connectivity alongside DPI development?

speaker

Luca Belli

explanation

Ensuring widespread access to connectivity is crucial for the success and equitable impact of DPI initiatives.

How can developing countries take control of their own digital infrastructure development?

speaker

Audience member

explanation

This is important for addressing concerns about external influence and enabling context-appropriate solutions in the Global South.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #119 AI for Multilingual Inclusion

WS #119 AI for Multilingual Inclusion

Session at a Glance

Summary

This discussion focused on the role of AI in promoting multilingual inclusion and expanding internet access to diverse language communities. Participants explored challenges in developing AI systems for less common languages and strategies to address these issues.

Key points included the importance of data collection and documentation of local languages to train AI models effectively. Speakers emphasized that communities should actively create and share content in their native languages online to build robust datasets. The need for improved internet connectivity in underserved areas was highlighted as a crucial step in enabling diverse language representation online.

The discussion touched on efforts by organizations like the Internet Society and Pan-African Youth Ambassadors on Internet Governance to promote multilingualism through training programs and community networks. Speakers noted the importance of innovation and local solutions in developing AI tools tailored to specific language needs.

Challenges discussed included the dominance of major languages in AI development, the potential loss of minority languages, and the need for greater representation in tech fields. The conversation emphasized the role of governments, academia, and industry in collaborating to advance multilingual AI development.

Participants stressed the urgency of preserving and promoting linguistic diversity online, calling for active engagement from communities to document and digitize their languages. The discussion concluded by highlighting the power of individuals and communities in shaping the future of the internet and ensuring linguistic inclusivity in the digital age.

Keypoints

Major discussion points:

– The importance of developing AI models and tools in multiple languages beyond just English

– The need for more data and content in diverse languages to train AI systems

– The role of governments, academia, industry and communities in promoting multilingual AI development

– Challenges in preserving minority languages and including them in AI/technology

– The connection between language, culture, and digital inclusion

Overall purpose/goal:

The discussion aimed to explore how AI can be leveraged to promote multilingual inclusion and expand internet access/content in diverse languages, especially for underserved linguistic communities.

Tone:

The tone was largely informative and collaborative, with speakers sharing insights and experiences from different perspectives. There was an underlying sense of urgency about the need to act to preserve linguistic diversity in the digital age. The tone became more action-oriented towards the end, with calls for participants to actively document and promote their languages online.

Speakers

– Jesse Nathan Kalange: Moderator

– Athanase Bahizire: Internet Society alumni, facilitator of Pan-African Youth Ambassador on Internet Governance, engineer

– Claire van Zwieten: Alumni specialist at Internet Society

– Ida Padikuor Na-Tei: From East African region (did not speak in the transcript)

Additional speakers:

– Alejandra (no surname available): Mentioned as able to provide information about Internet Society empowerment programs

– Miriam (no surname available): From Kenya, ambassador in PAYAG Swahili cohort

– Abdul Rehman: From Lahore, Pakistan

– Grace Ngijoi: From Cameroon

– Kenli Kosa: From Mozambique

– Abineth Sentayo: From Ethiopia

– Vlad Ivanets: Youth Ambassador of Internet Society

Full session report

Expanded Summary of Discussion on AI and Multilingual Inclusion

Introduction

This discussion focused on the role of artificial intelligence (AI) in promoting multilingual inclusion and expanding internet access to diverse language communities. Participants, including Internet Society alumni and youth ambassadors, explored challenges in developing AI systems for less common languages and strategies to address these issues. The conversation featured key insights from Athanase Bahizire and Claire van Zwieten, with moderation by Jesse Nathan Kalange.

Key Themes and Arguments

1. AI Development and Multilingual Inclusion

The discussion emphasized the critical need for developing AI models and tools in multiple languages beyond English. Athanase Bahizire highlighted that AI models require diverse language data to be truly inclusive. He explained, “For us to have AI models, we need to have data. It’s just like a human being, for you to start speaking, you need to listen. After listening, okay, you understand, you learn, then you can speak, then you can deliver. It’s the same with AI, it has to learn from the data and then deliver.”

Claire van Zwieten noted that the Internet Society promotes multilingualism in its programmes, recognizing the importance of linguistic diversity in AI development. The Internet Society works in four main languages (Arabic, Spanish, French, and English) for their official trainings, while their chapters work in many more languages locally.

Audience members pointed out specific challenges with current AI tools, such as difficulties with Swahili greetings and a Punjabi resume-building project, underscoring the need for improvement in this area.

2. Data Collection and Language Documentation

A crucial point of agreement among speakers was the importance of data collection and documentation of local languages to train AI models effectively. Athanase Bahizire stressed that documenting local languages and content is vital for AI development and cultural preservation. Claire van Zwieten concurred, noting that AI can help preserve endangered languages if properly developed.

Claire provided a concrete example of the Navajo tribe’s efforts to preserve their language using AI, which inspired further discussion about practical applications of AI in preserving minority languages and cultural heritage.

3. Connectivity and Content Creation

Athanase Bahizire highlighted the crucial role of connectivity in enabling diverse language representation online and AI development. He emphasized the importance of community networks in improving connectivity and enabling content creation in local languages.

4. Challenges in Multilingual AI Development

Several challenges were identified in developing AI systems for multiple languages:

– Lack of quality data in many languages

– Technical challenges in accommodating non-Latin scripts

– Limited representation of diverse languages in AI development

These challenges underscore the need for innovation and local solutions, as emphasized by Athanase Bahizire.

5. Promoting Language Equity and Inclusion

The discussion touched on several strategies to promote language equity and inclusion:

– Encouraging learning and use of multiple languages

– Ensuring public services support multiple languages

– Increasing diversity in language representation

– Supporting local chapters working in their languages

– Documenting cultural heritage and traditional knowledge

– Leveraging grants and funding for language preservation projects

6. Collaboration for Multilingual AI Development

Jesse Nathan Kalange highlighted the need for a multi-stakeholder approach involving government, industry, and academia to advance multilingual AI development. Claire van Zwieten discussed the Internet Society’s role in connecting the unconnected, while Athanase Bahizire stressed the importance of local initiatives and innovation.

Claire van Zwieten also emphasized the need for more women in ICT and the importance of mentorship. She highlighted the Internet Society’s efforts in empowering youth to become future internet leaders, including the Pan-African Youth Ambassadors on Internet Governance program, which Athanase Bahizire described in detail.

Thought-Provoking Comments and Their Impact

Several comments sparked deeper discussions:

1. Athanase Bahizire provided historical context about AI, noting, “AI is something that is just coming now, but we have been having AI systems from long ago, and they are still developing.”

2. Claire van Zwieten highlighted a critical paradox in AI development, stating, “AI is great for the digital divide because it helps bring some people up but it also very deepens it.”

3. Athanase Bahizire emphasized the importance of local innovation, stating, “We need to build our own systems.”

Conclusion and Future Directions

The discussion concluded by highlighting the power of individuals and communities in shaping the future of the internet and ensuring linguistic inclusivity in the digital age. Athanase Bahizire’s closing remarks stressed the importance of innovation and being part of the solution.

Several follow-up questions were raised, indicating areas for future exploration:

1. How to encourage local communities to produce better quality text content in their languages

2. Ways to empower local communities to use AI systems in their own languages without fear

3. Methods for tailoring AI to support underserved or minority-language speakers

4. Concrete strategies for documenting languages

These questions underscore the ongoing challenges and opportunities in developing multilingual AI systems and promoting linguistic diversity in the digital realm.

The session concluded with the announcement of gifts for participants, highlighting the collaborative and engaging nature of the discussion.

Session Transcript

Jesse Nathan Kalange: Can they hear me? Yes, can you? I think so. Can you hear me? Yes, I can hear you and you can hear me. Perfect, we're here. Great. Thank you, and apologies for starting a little bit late, but, you know, technical issues. Okay, hello. We can kick this off. You have your mic; you're going to be leading this session. I don't think you need that if you're talking. Yeah, just try. Hello. Hello. Perfect. Okay, great. So the floor is yours. We also have people online and everything is working. Ready? Perfect. Good evening here in Riyadh, and good morning or good afternoon to everyone watching. Thank you for joining this session. We're going to have a very insightful session between the Pan-African Youth Ambassadors for Internet Governance, the Internet Society Foundation, and the ISOC alumni group. Today I have here Athanase Bahizire, who is from the DRC IGF. I also have Claire van Zwieten from the Internet Society Foundation. Sorry, NASA will be joining us. And I also have Ida from the East African region. We're going to talk about AI for multilingual inclusion, and we want this to be a practical discussion; we have some youth here whom we want to engage and include in this session. To open the session: we have seen so many challenges in expanding internet access and the availability of local content across many languages. While working towards this and advancing human rights and inclusion in the digital age, AI for multilingual inclusion also asks how the internet can be expanded to more languages and be inclusive in all respects. Through the use of multilingual AI systems, we can engage digitally isolated populations, people who are isolated from the internet by language barriers, and grant them equal access to information. In this, we want to improve digital literacy and education efforts to make internet content available to everyone, and we are working on that across five languages. It's been a very awesome time to welcome my speakers here. I will give them just one minute each to introduce themselves. Looking online, I have Claire here. Claire, if you can hear me, just a minute to introduce yourself, then I will move to Athanase, and the rest will join us. All right. Thank you.

Claire van Zwieten: So, first, hi, everyone. My name is Claire. I am the alumni specialist at the Internet Society. I have the wonderful job of working with our alumni, two of whom are there on the stage today, and also Ida, who is here as well. Thank you so much for coming to this session, and I'm very excited for this talk about how we can create more internet access for people who do not speak the dominant languages of the internet. Thank you.

Jesse Nathan Kalange: All right. Thank you. Let me move to Athanase.

Athanase Bahizire: Thank you so much. I'm Athanase, an Internet Society alumnus, and I'm one of the facilitators of the Pan-African Youth Ambassadors on Internet Governance. I'm an engineer by profession, and when it comes to the IGF ecosystem, I coordinate the youth IGF in the DRC. I'm very happy and looking forward to the discussion. Thank you. All right. Thank you.

Jesse Nathan Kalange: Thank you so much. Ida will join us very soon, and we can start with the remarks. I want to start with Athanase. You mentioned being a facilitator for the Pan-African Youth Ambassadors on Internet Governance. Can you give us some brief information about that, and about how multilingualism is built into the training you are doing there? Then we will dive into the next question for Claire. Thank you.

Athanase Bahizire: Thank you so much, Selby. Basically, we have been seeing a rise in the participation of different actors in the Internet Governance Forum and other internet governance-related activities, but we realized that there was a lack of meaningful participation from Africa. When we looked deeply into it, we realized that many African countries don't speak English, and non-English-speaking countries tend not to be active. So we tried to play our part in the solution, and we came up with this initiative, the Pan-African Youth Ambassadors on Internet Governance. Basically, it has five cohorts in five different languages. The target is to train 1,000 young people per year across the five languages, so 200 per language. The five cohorts are the Arabic cohort, the Portuguese cohort, the Swahili cohort, the English cohort, and the French cohort. What is very unique in this program is that we have introduced some African languages that are only spoken in Africa, and some other languages that are not widely spoken, so that we build the capacity of the different participants, the African youth, so that they understand the stakes of internet governance. We then guide them through mentorship so they can join these discussions, participate, and also contribute locally to different ideas in their countries or regions. So, briefly, that's what the Pan-African Youth Ambassadors on Internet Governance is about.

Jesse Nathan Kalange: All right, thank you, Athanase, for highlighting that. Claire, I want to move to you. Before people can come out as Internet Society fellows or ambassadors, I know they don't just come out like that; there is training. And I've seen several trainings in different languages that the Internet Society is working on, in terms of closing that language barrier around internet governance and other topics. Can you also highlight what the Internet Society Foundation, and the Internet Society as a whole, is doing in terms of training and multilingualism around the world? Thank you.

Claire van Zwieten: That's a great question. Thank you so much for asking. At the Internet Society, as a global organization, we are committed to making sure that the internet is for everyone. That really is what we are striving for; that is our goal and how we think. And the internet cannot be for everyone unless everyone has access to it and can read it, and of course you cannot read the internet if you do not speak the language of what is written. So at the Internet Society, we really do our best to include as much interpretation as possible when we communicate with our community. We do so in Spanish, French and Arabic when we can, and we are very committed to making sure that whenever we are speaking with our community, or with people beyond our community, we give them the ability to understand what we are saying in the language they are most comfortable in. So we really try to walk the walk when it comes to multilingualism and the internet, by providing as many interpretation options as we can.

Jesse Nathan Kalange: Okay, all right, thank you. Alejandra is also here, but today this session is not going to be a panel discussion; it's going to be a group discussion, as we are seated here. Before we go to the next question: we have seen AI language tools and AI systems emerging that are trained on large language models. I want to ask the room, so that we have a conversation with you before we come back to the panel. In terms of language, have you tried communicating with some of these AI tools, like Googlebot or ChatGPT, which is very common, in your local language? What does that look like? Are you still limited to starting in English because the tool can provide answers only in English? Have you tried different languages with these AI tools? If you have that experience, please share it with us. And mention your name, your country, and the organization you are representing.

Audience: Thank you. Okay, hello everyone. My name is Miriam, from Kenya. I'm an ambassador in PAYAG, in the Swahili cohort. Personally, I've interacted with ChatGPT; I use it all the time. But for Swahili it's a bit tricky, even with basic greetings. If I ask 'habari yako', the response should be 'njema'. The AI really hallucinates: it gives you wrong answers, and then you have to tell it, no, that is the wrong answer, and the next time you ask 'habari' it responds with 'njema'. So for Swahili it's not really that good, though it's generally doing well. Maybe just not for the common greetings, because greetings in Kenya and Tanzania are a bit different. But generally, Swahili is okay for me.

Jesse Nathan Kalange: Okay, all right. So

Audience: Okay. My name is Abdul Rehman and I'm from Lahore, Pakistan. We have worked with ChatGPT in Urdu and Punjabi. Urdu is reasonably good, but Punjabi has some issues. We're trying to build a platform for local daily-wage earners, like plumbers, to make their resumes by speaking in their Punjabi accent or Punjabi language, and that's still a hard thing for OpenAI right now. That's my take. Okay. All right. Yeah. Hello everyone. My name is Grace Ngijoi from Cameroon, and concerning local languages, it's a little bit complicated in our country because we have more than 200 local languages. But there is actually a young Cameroonian who is working on it; he has specialised in the common languages spoken by a majority, like Ewondo, Bassa and Bamileke, because we have more people who come from that side. It's not yet effective, but at least, from the news we are getting from him, it's something that can help us a lot in terms of communication with our local people.

Jesse Nathan Kalange: Okay. All right. Thank you. Do we have anyone else? Okay. We'll come back with another question; let me go to this one. We've looked at how AI is shifting with respect to local languages. You mentioned PAYAG has five languages: Swahili, Arabic, Portuguese, English and French. English is the common one, and it's the language most AI models are trained on. Now, I want to put the same question to Claire, and then Claire can also explain the perspective from the ISOC side. We want to understand, in terms of this digital language divide, because we see there is a vast gap between English and the other languages: how can we ensure that there is equal access to AI technology for speakers of all languages?

Claire van Zwieten: It's a great question, and I think it points to one of the most fundamental challenges we face: having data to train on. So much of the content on the internet is in English, and when AI systems need data to train on, many of the data sources they use are in English. Having ample data sources in other languages will be instrumental to making sure we can use AI for multilingual inclusion on the internet. Once we are able to gather enough data in every other language we want, we will be able to make the internet far more accessible, but the biggest hurdle really is having LLMs that can be trained on this data. Until we have more access and more content, we won't be able to do it.
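One rough way to see the data gap described here is to audit how much of a text collection is actually in each language before treating it as AI training data. The sketch below is a minimal illustration only: it assumes the third-party langdetect package is installed, and the sample sentences are invented placeholders rather than material from the session.

```python
# Minimal sketch: checking language representation in a small text collection.
# Assumes the third-party `langdetect` package is installed (pip install langdetect);
# the sample sentences are invented placeholders.
from collections import Counter

from langdetect import detect

corpus = [
    "The internet should be for everyone.",
    "Habari ya asubuhi, karibu kwenye mkutano wetu.",        # Swahili
    "L'accès à Internet devrait être un droit pour tous.",   # French
    "Community networks can connect rural areas affordably.",
]

counts = Counter(detect(text) for text in corpus)
print(counts)
# Typically something like Counter({'en': 2, 'sw': 1, 'fr': 1});
# on real web-scale corpora, English dominates by a far wider margin.
```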

Jesse Nathan Kalange: Okay, all right, Athanase.

Athanase Bahizire: Thank you so much. Let me just give a bit of context. I told you I’m an IT engineer by profession, and there is some stuff we don’t understand about AI. AI, basically we say AI artificial intelligence. Basically it’s the ability of the machine to mimic what the human brain can do, the task we can do. And we have seen a wide hype of AI now with the LLMs, the language models. And we think that AI is something that is just coming now, but we have- been having AI systems from long ago, and they are still developing, they are still developing. And something Claire just mentioned, very important. For us to have AI models, we need to have data. It’s just like a human being, for you to start speaking, you need to listen. After listening, okay, you understand, you learn, then you can speak, then you can deliver. It’s the same with AI, it has to learn from the data and then deliver. So there is this, when it comes to multiple languages, and there is this divide we have, that we think many of our languages are not documented. And that equals, they can only deliver, AI systems can only deliver from the data they’ve got. There is an issue now we have in different, our different communities now. We think when you use, you publish content in your local language, you feel like many people won’t see it. And so you can speak French, you can speak Swahili, you can speak Wolof, but you will go to English because you feel like that’s where you’ll get a wider audience. But then we create content on English and that data is going to feed English models and it will generate, it will help generate AI models in English. So one of the things I used to tell people is, if you want, if you feel like there is this disparity, the only way we can solve it now, as of AI is not that far, it’s still, okay, the hype is still new and we can document our local perspectives on the internet. So it’s about the data we put on the internet. So when you create content, yeah, create. you can make videos in your local language, you can publish articles in your local language and that will help feed these models. One other thing like we have seen use cases in countries like Rwanda in Africa, where they’ve managed to create a model that they can ask the AI any question related to legislations and it will give them answers and references like this is in this bill in this law and is under which article. So what is happening is that they’ve got all the legislations in their local languages, but they were in paper based. So what they’ve tried to do is to correct that data. They supported young entrepreneurs, actually young students, young startups with hackathon and competition and tell them, so you have to build a data project where you take this local bills and you digitalize them first of all, because that’s the first point. You get them online, that’s the second point. And then within the data, how can I say, within the data curriculum, within the data line, you need to have the data. Then there is what they call data cleaning. Some of the terminologies you see, they are not accurate. So they used to review that data, cleaning it to make sure it’s accurate and can go to this platform. And after they’ve got the data on internet, now they can start training the models on that data. And training, trying in machine learning, we used to say accuracy of your model. So you’re going to train your model, but sometimes it won’t respond accurately. 
Most of the time we see that even ChatGPT, which is really big, cannot always respond accurately. So, as someone was saying, you ask a question and the AI cannot reply correctly, but then you tell it the answer, and the next time you ask, it responds with the good answer. That is one of the three types of machine learning. In the first one, you give the answer: you show a picture of a tomato and you also tell it, this is a tomato, so when someone asks, it will say, this is a tomato. In the second, you give a picture of a tomato and you don’t say anything, and it will tell you, okay, this is a tomato, based on what it has got from the internet. In the third, you give it a picture of a tomato, it tells you it’s an orange, and you say, no, this is not an orange, this is a tomato. It will save that new data so that the next time you ask, it is more accurate. And this is actually the best way of learning, the same as with our children: you tell them, no, this is not good, so next time they know that if they go that way they may fall, and they do it the proper way. It’s the same thing with AI. And when it comes to multiple languages, the only way we can build strong AI models in our languages is by training them with the data we have. So I really encourage you: in your everyday life, when you want to do your work and you feel you can do it in a non-English language, do it. It is important, and at some point we will definitely need this varied data.
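
Editor’s note: the three learning setups Athanase describes, labelled examples, unlabelled examples, and learning from corrections, correspond roughly to supervised, unsupervised, and feedback-driven training. The minimal Python sketch below is not part of the session; it only illustrates the distinction using scikit-learn, and the toy fruit features, values, and labels are invented purely for illustration.

# A minimal sketch (not from the session) of the three learning setups described
# above, using scikit-learn. The toy "fruit" data is invented for illustration.
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Toy features: [redness, roundness] on a 0-1 scale, for tomatoes and oranges.
X = [[0.90, 0.80], [0.85, 0.75], [0.40, 0.95], [0.35, 0.90]]
y = ["tomato", "tomato", "orange", "orange"]

# 1) Supervised: show the picture AND the label ("this is a tomato").
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[0.88, 0.78]]))  # predicts 'tomato'

# 2) Unsupervised: show only the pictures; the model groups similar ones,
#    but the groups have no names until a human labels them.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))

# 3) Feedback: the model guesses, a human corrects it, and the corrected
#    example is added to the training data so the next answer is more accurate.
sample = [0.55, 0.93]
guess = clf.predict([sample])[0]
correction = "tomato"  # the human says: "no, this is not an orange, it's a tomato"
if guess != correction:
    X.append(sample)
    y.append(correction)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # retrain with the correction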

Jesse Nathan Kalange: All right, thank you. So Athanase, you made a very good point. I’m interested in one part, but I will come back to you later. Let me see if Aida is online; you can unmute. You see, with the AI perspective, it also starts from the literacy aspect of AI. We can feel that there are gender disparities within AI and also within multilingualism: many women speak and understand their local languages very well, but because these tools are not connected to their languages, they cannot use them to navigate AI, so it becomes very divisive when we are talking about AI. If Aida is there, maybe you can answer: what do you think we can do to close the gender disparity when it comes to multilingualism and the use of AI? If Aida is not there, maybe Claire can speak on that for us, because I know the Internet Society also promotes gender equality. Thank you.

Claire van Zwieten: Absolutely. I don’t think Aida is able to speak, so I’ll speak on her behalf. The Internet Society is deeply committed to bridging the digital divide, and including women in the Internet space is a huge part of that. We are committed to making sure that women have access to trainings and are able to gain all of the opportunities of the Internet in the same way. A lot of that has to do with making sure that women have access to training and to mentorship. There are a lot of studies showing that women who go into technical fields without a mentor are less likely to complete whatever training they are doing and less likely to succeed in that field. So it is one of those issues that pile on each other: we need women in ICT so they can mentor the younger fellows who are coming in and need a woman to help guide them through the process, because it is different to be in a male-dominated field, as the technical community really is. Something else I want to mention, which you raised before, is that AI can help bridge the digital divide by bringing some people up, but it can also deepen it. If you are in an area where your language is not represented on the internet, largely because you don’t have very good access in your area, you are not going to be able to use the benefits of AI either. So it compounds on itself very quickly, just as the issue of women in ICT does. Women bring a lot of value and perspective to the field that is very necessary to keep it moving forward, and I look forward to even more organizations beyond just the Internet Society and PAIAG working to bring more women into the field.

Jesse Nathan Kalange: Okay, all right, thank you very much. At PAIAG we also promote gender equality, so when selecting people to learn about these new language models and multilingualism, we try as much as possible to be inclusive. As I mentioned, Rwanda has developed an AI chat for its legislation through which people can get information. Now, as I said, I will get back to you on that, and as an engineer there is also some technical stuff I need to ask you so that we all learn. But first I want to go back to the audience, because this is an engagement we are doing with them. What has your country been able to do to develop AI and multilingual content? Has your government or your country been able to develop something that you think may be the future? I remember hearing that on the Arabic side there are some models that have been trained on local datasets, but is that from the government? Can someone share insight from their country’s perspective? Then we can come back to the discussion and see what can be done, because, as Claire was mentioning, beyond PAIAG and ISOC we can get other organizations on board. So does anyone have a contribution or question on what their government has done in terms of multilingualism?

Audience: Yeah, hi, everyone. My name is Vlad Ivanets. I’m one of this year’s Youth Ambassadors of the Internet Society, and I’d like to share my experience, because I’m originally from Russia. We have a local company, quite a big tech company, Yandex, which is also working on an LLM. It is quite popular, I would say, probably not only in Russia but also beyond the country, and they really work hard on creating a competitive system. But they are encountering some problems right now. They say the internet lacks enough resources in what they call exotic languages, and they are really worried that they will not be able to create a sufficient AI tool based on these languages. They also say they are running out of Russian resources, and this applies mostly to high-quality resources, because yes, you can find a lot of information online, but is it of high enough quality? Can you really use it to build an effective model? They doubt it. And I think this is a question that should be addressed and discussed among us: how can we encourage local communities to produce better-quality text, and how can we empower them to use AI systems and not be afraid of using them in their own local languages? Yeah.

Jesse Nathan Kalange: All right, thank you very much. Very nice question, because that was the next question I was coming to: delving into the culture, the dialects, and other aspects related to AI. As someone was saying, I greeted an AI in Swahili, but the response was not as good as expected. So we see that because the AI has not been trained in Swahili, the culture, the dialects, and the nuances within that kind of language are not captured. And as my fellow ambassador is saying, there is also the issue of the quality of data, because we believe the communities can produce a lot of local content. But what I also see is a disconnection with rural people and how they can contribute, because people are not connected: they don’t have internet access, they don’t have access to mobile. So content creation in local languages is also limited, and we should look into that. Can I also have any other perspective on countries doing work like Rwanda, in terms of policy, where the government is supporting AI multilingualism or promoting local content through AI? Are there innovations that the youth have been able to bring? Then we can move to the next question for our speakers. Okay. Yeah, Claire has something to share. All right, Claire.

Claire van Zwieten: I’m based in Amsterdam, but I don’t sound like it because I spent most of my life in the United States. There has been some amazing work done by the Navajo, a tribe indigenous to the United States, whose language is disappearing: as new generations grow up, they are not learning Navajo the way their parents or grandparents did. So there has been a huge push among young people in the region to make sure their language is protected through AI. They are helping feed large language models with the documents and texts they have, to make sure that people who are learning Navajo in school are able to use ChatGPT in Navajo when they want to ask about their assignments, or, if they want to know something about their cultural history, they are able to ask in their native language. And language is so intrinsically connected to culture that when you lose so much of your language, you end up losing parts of your culture as well. So I give a lot of credit to the Navajo Nation and the work of their young folks in making sure their language is protected as generations go on and the schools in the area no longer teach it. That is one example of a community using AI to make sure their language is not only preserved but usable and effective in the coming centuries, as it will probably happen that their schools no longer teach it.
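
Editor’s note: both the Rwandan legislation project and the Navajo effort follow the same broad workflow, digitise, clean, then train. The Python sketch below is not part of the session; it shows one plausible first step, turning a folder of digitised text files into a cleaned, deduplicated corpus that a language model could later be trained on. The folder name, output path, and the minimum-length threshold are assumptions made for the example, not details given by the speakers.

# A hedged sketch (not from the session) of preparing digitised local-language
# text for model training: normalise, drop fragments, deduplicate, write JSONL.
import json
import re
import unicodedata
from pathlib import Path

SOURCE_DIR = Path("digitised_texts")   # assumed folder of scanned/OCR'd .txt files
OUTPUT_FILE = Path("corpus.jsonl")     # assumed output consumed by a training job

def clean(line: str) -> str:
    """Normalise Unicode and collapse whitespace; non-Latin scripts are kept intact."""
    return re.sub(r"\s+", " ", unicodedata.normalize("NFC", line)).strip()

seen, records = set(), []
for path in sorted(SOURCE_DIR.glob("*.txt")):
    for raw in path.read_text(encoding="utf-8").splitlines():
        text = clean(raw)
        if len(text) < 20 or text in seen:   # skip page numbers, OCR fragments, duplicates
            continue
        seen.add(text)
        records.append({"source": path.name, "text": text})

with OUTPUT_FILE.open("w", encoding="utf-8") as out:
    for record in records:
        out.write(json.dumps(record, ensure_ascii=False) + "\n")

print(f"Wrote {len(records)} cleaned lines to {OUTPUT_FILE}")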

Jesse Nathan Kalange: Okay, all right, Claire. Let me come back to Athanase. You mentioned that even in software engineering, the data we feed into AI is not that rich in terms of multilingualism, and there are issues with the sources and the quality of that data. When it comes to developing local-language content and AI models, do you think there is open-source software that allows us to build that multilingualism, from a development perspective?

Athanase Bahizire: Thank you so much. Very good question, and actually the answer is quick. You know, these big AI models are developed in California. So what do you expect? Will someone developing a system in California include Pidgin? How? The idea here is innovation, and the only way we can innovate is by building our own systems. Many of these base source codes are open; like OpenAI, at the base it was open source. In the Emirates, in Dubai, there is a research university working on AI; if I recall, the name starts with something like “Fa”, I don’t recall the name in Arabic, but basically what they are trying to do is build a strong AI model that is based first in Arabic, because they at least have enough data in Arabic. Now it has English and some other languages, but it was first based in Arabic. You see, it is promoting their own perspective. So what I can see is that if we want multilingualism, it is all about innovation: we need to build our own systems. And we have many resources. In the past, education was all about going to big universities and landing all these big opportunities, but now, as Claire was saying, the internet is an enabling tool, and through the internet we can learn many of these things. Many of the software engineers you see today will tell you they didn’t learn 90% of their skills at university. The internet is enabling us to do wonders. So I encourage you: if you are interested in one of these fields, you will find resources on the internet, on Google, and you can start. There are many open-source options; even Gemini from Google has a version that is open source that you can take and build something on. We have a Congolese boy who is trying to build on it, because in our country the traffic can be terrible. He doesn’t build the dataset by himself; he is building on the open-source AI, applying it to our own traffic, to see if we can find solutions to that particular problem. And it can happen: I’m hopeful that in two or three years he will be able to come up with a strong solution. That is one example of people trying to leverage open-source resources. Another thing I would say is that innovation also comes from a culture of loving what you do. I’ve seen a lady who studied economics at university and went on to an MBA, but at some point she started asking me for advice on code. She decided to start coding, and now she is very good. At some point she was learning a new programming language, and I thought, oh, she knows this very well. It is not that she studied a tech curriculum; she got access to different resources, and now she is good at it and can do great things. So I encourage people: if you have the interest, do what you can, and make sure you document your local perspectives, because that will really help with this inclusion, this diversification. There is one more thing. When we talk about multilingualism, there is something we technically call IDN, internationalized domain names: non-Latin scripts on the internet.
Just a quick example: when you get, in your email inbox, a message whose sender address is written in, let’s say, Amharic script, a name in Amharic, dot something, when you see it you will probably feel it is spam. The first thing you’d think is, hmm, what script is this? You’d say, no, this is spam. But we are talking about multilingualism, and we now have domain names in Arabic, in Russian; even the Russian TLD .ru has a version in Cyrillic script, and the same in many other countries. Egypt has one in Arabic, and there are others in Chinese. So sometimes multilingualism is also about accommodating scripts that are not Latin, and technically speaking this is a challenge for developers. For many developers, if the budget is not increased to accommodate this, it feels like extra work. But there is this spirit of wanting to go global. Say I have an e-governance platform I am developing, and you want people to apply for visas in Guinea or in Djibouti. Not everyone speaks English or uses Latin script, and someone using a non-Latin script may write from right to left. So you have seen an address where, instead of, let’s say, athanase at gmail.com, it reads .com, gmail, at, athanase. Technically these are possible; they do exist. But for me as a developer, it is an extra layer of work to accommodate this kind of email address on my platform. Yet when I have an ambition to go global, and I was talking about e-governance, some people will come to apply for a visa to go to Djibouti, but if your platform only accommodates what they call Latin script, then how will someone from Bangladesh, coming with Bengali script, apply? His email address is in that specific script. How would he use your system? He can’t even log in, because he would need a different email address. When you want to go global, you accommodate this kind of technology. You say: maybe now we don’t have many users from outside our country, but we expect more people in the coming years. So you design your systems with this inclusion in mind, and that will really help us reach the level of multilingualism we want. Back to you, Fifi.
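
Editor’s note: internationalised domain names (IDNs) are carried in DNS as an ASCII “punycode” form (the xn-- labels), and Python’s standard library can do that conversion, which is often the small extra step a sign-up form needs so it stops rejecting non-Latin addresses. The sketch below is illustrative only: the example strings are placeholders, the built-in codec implements the older IDNA 2003 rules (the third-party idna package implements IDNA 2008), and fully internationalised email delivery additionally requires mail servers that support SMTPUTF8 (RFC 6531), which is outside this sketch.

# A small sketch (not from the session) of accepting non-Latin domain names in a
# form instead of rejecting them. The example strings are placeholders.
def domain_to_ascii(domain: str) -> str:
    """Convert a Unicode domain name to the ASCII (punycode) form used in DNS."""
    return domain.encode("idna").decode("ascii")

def normalise_email(address: str) -> str:
    """Keep the local part as entered; convert only the domain to punycode.
    End-to-end internationalised email also needs SMTPUTF8 on the mail side."""
    local, _, domain = address.rpartition("@")
    if not local or not domain:
        raise ValueError(f"Not an email address: {address!r}")
    return f"{local}@{domain_to_ascii(domain)}"

print(domain_to_ascii("пример.испытание"))           # prints the xn-- (punycode) form
print(normalise_email("athanase@пример.испытание"))  # local part untouched, domain converted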

Jesse Nathan Kalange: Okay, all right. Thank you, Athanase. Earlier you were saying, and Claire also mentioned, that without regulation from the respective governments, AI cannot function the way it is supposed to in every country in terms of promoting multilingualism. Let me come back to the audience: if anyone has a question, we will take it now, then wrap up with final questions and answers on AI regulation, and then close the session. Do we have any question online or in the room? Okay. Okay.

Audience: Well, I believe that improving language equity involves addressing different parts of representation, so I noted some points here. Excuse me, can you mention your name? Sorry? Can you mention your name? I’m Kenli, Kenli Kosa from Mozambique. Okay, all right, thank you. So, to promote language equity, I have some points here that could help all cultural languages be included. First, promote multilingualism: encourage the learning and use of multiple languages in schools and communities. Second, increase language access: ensure that public services provide materials and support in multiple languages. And one more: increase representation, that is, promote diversity of language and representation in literature, media, and academia. With these points, I believe we can bring language equity to all of us around the world.

Jesse Nathan Kalange: All right, do you have any questions too? Yeah. Claire, do you want to say something?

Claire van Zwieten: Yes, I just want to say that I really like the point the audience member made, because representation is such a big part of this. When you have a group of people building a model that is supposed to be used by the whole world, it is still being created in the context of the model builders’ culture and language. So when we don’t have people building models from their own cultural and linguistic perspective, they will always be the ones adapting to the other side. As we talk about adapting to different addresses, such as those in Amharic, and the different way they would be read, it is important to recognize how long those communities have been adjusting to us. So I just want to take a second to highlight that there are many people in the global majority who are not able to use these systems the way they would like, because the systems were not developed in their context. Changing that requires far more representation in academia and in the technical fields, so we have more cultures and more languages involved in creating these kinds of models. Thank you.

Jesse Nathan Kalange: Any other questions? Okay. We have two people there. So we will start with Abineth, then go to Marion. Okay.

Audience: Thank you so much. My name is Abineth Sentayo from Ethiopia. The session title, AI for Multilingual Inclusion, is especially essential and inspirational for a country like Ethiopia, because Ethiopia has more than 80 nationalities living together and more than 80 languages spoken in the country. Among those languages there are minority languages as well as majority languages. Within the context of internet governance, promoting digital equity means ensuring that different ethnic and linguistic groups are not left behind by technological advances. So my question is: how can AI be tailored to support underserved or minority-language speakers in particular? Thank you.

Jesse Nathan Kalange: Okay. So Claire, then Athanase, are you going to answer this one? Let me just add on top of it. It is a very good question, because Claire mentioned that the Internet Society currently focuses on Spanish, Arabic, English, and French. When you take, let’s say, Central Africa, we have about five or more countries that speak French. When you come to West Africa, let’s say my country Ghana, Nigeria, and others, they speak English. When you go to North Africa, Egypt, Libya, Morocco, they speak Arabic, so they are fine with that. In East Africa, Swahili is dominant across Kenya, Uganda, Tanzania, Rwanda, and so on, but even in Uganda, from my experience, Swahili is not the only language there; they also have their own languages. Ethiopia is part of East Africa, but they also have their own languages. Portuguese speakers, from Mozambique, Sao Tome, Angola, Cabo Verde, are grouped together, even though their Portuguese can be quite different. So these are what he is classifying as major languages: when you pick Spanish, you get five or more nationalities that speak it; when you take French, you see it in Europe, in Africa, and elsewhere; English is a very common language almost everywhere. We are working on those. But there are countries with their own languages apart from English, and we are not yet considering those in this inclusion because they are counted as minor languages; most of the focus goes to languages spoken across five to ten countries. So he is asking, based on the inclusion we are talking about: are we also going to look out for these minor languages, where maybe 2 million or 20 million people in one country speak the same language, but it is considered minor because no other country speaks it? That is the inclusion he wants to understand. So Athanase, you can go first, and after that Claire will give her thoughts on what the Internet Society is trying to do.

Athanase Bahizire: Thank you so much. Very good question, actually. I want to give you a perspective here. We have European countries, like Slovenia, Romania and so on, with relatively small populations, but their languages are on Google, on OpenAI and everything, and they have a strong foundation in their online presence. In reverse, there are languages like Wolof, which is widely spoken in more than three African countries, and Hausa, which is spoken in more than four countries by a very large number of people. If you are talking about a major language in the sense of population, the population that speaks Wolof may be bigger than the population that speaks Romanian. So the point is not really about the language itself; it is about how you document your own language. That is what I was saying: I encourage you to document your languages. The other thing, after documentation, is connectivity, which Claire talked about. There is the case of India. About ten years ago, they were at the same level as many African countries, but they found a solution: they built a network of connectivity, fiber and other alternatives, so the country is interconnected and the infrastructure is in place. And we have seen that when the infrastructure and connectivity are there, e-commerce rises quickly, and so does digital literacy. When you go to platforms like YouTube, if you want to cook, among the ten videos you see, you will find one of theirs. If you go to online platforms beyond the big ones, whatever you are looking for, you find it. Mobile banking and mobile money are widely used there. Why? Because the connectivity was there. And then they will naturally document, do business, and support agricultural activity with the help of the internet, because the connectivity is there. So I believe connectivity is very important for us to get there, because these applications sit on top of connectivity; we need to have connectivity. And what is the work of the Internet Society? Trying to empower communities and build community networks, whereby a community that feels marginalized, where the big ISPs don’t find business, can build its own access to the internet. From there, they will be able to leverage all the benefits that come with connectivity. That is one of the things: when you have connectivity, you document your perspective, and when you document, we have the data, and we can build AI models and much more.

Jesse Nathan Kalange: Okay. So, Claire, we are about to close, because we have four minutes; you can take a minute or a minute and a half for this. We have seen that the Internet Society is an international organization with a presence in various countries, and in terms of collaboration it is doing its part. Now, can you give us an understanding of the roles of the others? We have government, industry, and academia, who do the research; that is why I mentioned the country that did its research and saw, as in India, that people have to be connected first, and once connected they can create content in their local languages or other languages. Government, industry, and academia all have a role to play, and we all understand that we cannot do this without innovation. So, in terms of collaboration on multilingual AI and language development, what can we do? Can you share what these multistakeholder organizations can do, what the Internet Society is doing to support this, and what others can learn from that? Thank you very much.

Claire van Zwieten: Thank you. That’s a great question. I would like to start by saying that, while we only cover Arabic, Spanish, French, and English in our courses, we have over 100 chapters around the world working with their local communities to solve their internet challenges, whether that is access or regulation that is harmful to the internet. They are able to do all of their work in their local language because of that local component. So while our official trainings are only in those four languages, our broader community really does communicate in a much wider variety of languages. And of course, multilingualism and AI will make the internet much more accessible to everyone; you can’t have AI without the internet. We have amazing projects at the Internet Society that aim to connect the unconnected. We know that the remaining 2.8 billion people are going to be the hardest to connect, so we are doing the best we can, working with amazing partners around the world to connect those communities to the internet and give them training so they can continue to maintain it. Beyond that, we are able to give them the skills to communicate on the internet in their local language, so there is greater representation. It is a process, and as one organization we can only do so much, but we hope that with the power of our chapters, our lovely alumni like you two, the people in our courses and fellowships, and our greater community, we can support that mission of making sure all communities are connected to the internet and have the ability to use their local content online.

Jesse Nathan Kalange: Okay, all right, thank you. Does anyone have a question? We are about to close, so the speakers will give their final words. Athanase, in your final words, just a minute and a half, you can also highlight what we can do to improve AI development and research through collaboration to advance AI and multilingualism. Then we will close. Okay, yeah.

Audience: Okay, should I introduce myself again? This is Grace from Cameroon. As an ambassador for PAIAG, I have a question. Athanase mentioned that we should document our languages. Can you explain how we can do that? For example, from Cameroon, how can I help young people understand that we need to gather our documentation? What should they do, concretely, to gather those documents?

Athanase Bahizire: Okay, thank you so much. Very good question. Quickly: in the past we had libraries, but you can start simply, even personally. Do you know your grandfather? Do you know his father? If you reach out to your family, you can gather specific data about your lineage going back several generations, and that is valuable data you cannot find anywhere else. You will be able to generate information that nobody else in the world can find, and your grandchildren will certainly find it a resource. There are other approaches too. We used to have our traditional music: most of the time at weddings you see people singing these songs, but we are tending to forget them. If you have them, you can say, I have a collection of old music, and upload it to a streaming platform. That becomes data people will be able to build on, and when people want to learn later on, they can build on it. We don’t have much time, but I think if you need more information from us,

Audience: I’m sorry for jumping in, but we have some grants, and I can tell you that there are projects on the digitization of libraries. So you can take a look, because you are not alone; there are many people working on this. Sometimes the issue is that you don’t know what to do, or you know what to do but you don’t have the funding to do it. The Internet Society also helps you with this. So just for you to know: there are many people trying to do the same, and we can support you.

Athanase Bahizire: Okay, yeah, thank you. So I was saying, if you want to learn more about PAIAG, about the Internet Society, or about some of the technical questions, we can meet after this session and discuss informally, because we don’t have time now. My parting remark: when we want to build AI and inclusion, multilingualism in AI, what we need is innovation, and innovation will come from creating our own solutions to our own problems, actually solving our own problems using digital technologies. Some of our countries are experiencing floods or, like in my country, volcanoes, so there are perspectives no one else has experienced, and you can create a solution no one else has created; that is where innovation comes in too. Innovation is very important, along with content creation and data collection. We need to document: the internet is already there, we need to put things on it. So document your expertise, document your life, in a way that can help the coming generations. And one more thing I was saying: there are challenges, but we need to be part of the solution. There have been challenges with AI, with connectivity, and with other aspects, so the only thing I can advise is: let us try to be part of the solution, to include our languages, and to be part of the solution that makes AI ethical and inclusive.

Jesse Nathan Kalange: All right, thank you so much. Thank you, Athanase, for that. And Claire, your final words, then we close.

Claire van Zwieten: My final words are: thank you so much to everyone for coming to this session. The internet is for everybody, but not everybody has access. So I think it is important that we have conversations like this about how we can use new and innovative tools to extend the reach and accessibility of the internet. I am so thankful to Athanase and to Ibrahim Fifi Selby for being here and helping guide us through this conversation. And if you would like to hear more about how you can get involved, I really encourage you to go to Alejandra, who will raise her hand. There she is. She can tell you more about our amazing empowerment programs, which will train you to be the internet leaders of tomorrow, just like the two brilliant men on that podium.

Jesse Nathan Kalange: Okay, all right. Thank you, Claire. And thank you, wonderful people, for joining. I have a gift for everyone, so no one should leave: we have a gift for everyone who joined this session. Our time is up, and we thank you for joining. We also thank Alejandra for supporting this program. If you have a final word, just 30 seconds, then we can all leave. We appreciate you for joining, and we are very happy with this conversation. Thank you.

Speaker 1: I want to say thank you to all of you. This is an example of what we want to do at the Internet Society: we just give them tools, knowledge, a program that runs from six months to a year, and that’s it. That is what we do; the rest comes from them. They are the stars. So that is what I am telling you now: we need you. We are talking about multilingualism, and we need people like you to go out there and understand that if we don’t move and act right now, it doesn’t matter that we have funding, programs, and support; in the end, your languages are going to die if you are not supporting them. So think about it. You need to put everything you have online, defend your languages, and use all the tools we have, like AI, to make sure that all these languages live and have a future for our kids. And as I said, everything always depends on us. We are the power here, and the internet needs us. So thank you so much, and thank you to the great speakers we had. Please, a round of applause. Thank you.

Jesse Nathan Kalange: Then we are done. So just have a seat for a moment, and then, yeah, we will take a picture.


Athanase Bahizire

Speech speed: 147 words per minute

Speech length: 3332 words

Speech time: 1354 seconds

AI models need diverse language data to be inclusive

Explanation

AI systems require data to learn and deliver results. To have inclusive AI models that support multiple languages, there needs to be diverse language data available for training these models.

Evidence

Example of Rwanda creating an AI model for legislation by digitizing and cleaning local language bills.

Major Discussion Point

AI and Multilingual Inclusion

Agreed with

Claire van Zwieten

Audience

Agreed on

Need for diverse language data in AI development

Documenting local languages and content is crucial

Explanation

To create inclusive AI models, it’s important to document and digitize local languages and content. This provides the necessary data for training AI systems in diverse languages.

Evidence

Suggestion to document family histories, traditional music, and local perspectives as valuable data sources.

Major Discussion Point

AI and Multilingual Inclusion

Agreed with

Claire van Zwieten

Audience

Agreed on

Importance of documenting local languages and content

Differed with

Claire van Zwieten

Differed on

Approach to language documentation

Connectivity is key for communities to create online content

Explanation

For communities to document their languages and create online content, they need internet connectivity. This is crucial for building the necessary data for multilingual AI systems.

Evidence

Example of India’s progress in e-commerce and digital literacy due to improved connectivity.

Major Discussion Point

AI and Multilingual Inclusion

Agreed with

Claire van Zwieten

Agreed on

Importance of connectivity for multilingual content creation

Technical challenges in accommodating non-Latin scripts

Explanation

Developers face technical challenges when accommodating non-Latin scripts in their systems. This includes issues with email addresses and domain names in different scripts.

Evidence

Example of email addresses and domain names in Arabic, Russian, and other non-Latin scripts.

Major Discussion Point

Challenges in Developing Multilingual AI

Need for innovation and local solutions

Explanation

To achieve multilingual inclusion in AI, there is a need for innovation and local solutions. Communities should create their own systems and solutions to address their specific language needs.

Evidence

Example of a Congolese developer building on Gemini to solve local traffic problems.

Major Discussion Point

Collaboration for Multilingual AI Development

Importance of community networks for connectivity

Explanation

Community networks are crucial for providing internet access in areas where large ISPs don’t operate. This connectivity enables communities to document their languages and create online content.

Evidence

Mention of Internet Society’s work in empowering communities to build their own internet access.

Major Discussion Point

Challenges in Developing Multilingual AI

Document cultural heritage and traditional knowledge

Explanation

Preserving cultural heritage and traditional knowledge through documentation is important for language preservation and AI development. This creates valuable data that can be used to train AI models in local languages.

Evidence

Suggestions to document family histories, traditional music, and local perspectives.

Major Discussion Point

Promoting Language Equity and Inclusion


Claire van Zwieten

Speech speed: 179 words per minute

Speech length: 1655 words

Speech time: 553 seconds

Internet Society promotes multilingualism in its programs

Explanation

The Internet Society is committed to making the internet accessible to everyone by providing multilingual support. They offer interpretation in various languages for their communications and trainings.

Evidence

Mention of providing interpretation in Spanish, French, and Arabic when possible.

Major Discussion Point

AI and Multilingual Inclusion

AI can help preserve endangered languages

Explanation

AI technology can be used to preserve and protect endangered languages. This helps maintain cultural heritage and ensures language continuity for future generations.

Evidence

Example of the Navajo tribe using AI to preserve their language and cultural history.

Major Discussion Point

AI and Multilingual Inclusion

Agreed with

Athanase Bahizire

Audience

Agreed on

Importance of documenting local languages and content

Differed with

Athanase Bahizire

Differed on

Approach to language documentation

Limited representation of diverse languages in AI development

Explanation

There is a lack of representation of diverse languages and cultures in AI development. This leads to AI models that are not fully inclusive or representative of global linguistic diversity.

Major Discussion Point

Challenges in Developing Multilingual AI

Agreed with

Athanase Bahizire

Audience

Agreed on

Need for diverse language data in AI development

Support local chapters working in their languages

Explanation

Internet Society supports over 100 global chapters that work with local communities in their own languages. This helps address internet challenges and promote linguistic diversity online.

Evidence

Mention of chapters working on local internet challenges and regulations in their local languages.

Major Discussion Point

Promoting Language Equity and Inclusion

Internet Society’s role in connecting the unconnected

Explanation

The Internet Society works on connecting the remaining 2.8 billion unconnected people to the internet. This is crucial for enabling diverse communities to participate in the digital world and contribute their linguistic content.

Evidence

Mention of projects aimed at connecting unconnected communities and providing training for internet maintenance.

Major Discussion Point

Collaboration for Multilingual AI Development

Agreed with

Athanase Bahizire

Agreed on

Importance of connectivity for multilingual content creation

Need for more women in ICT and mentorship

Explanation

There is a need for more women in the ICT field and for mentorship programs to support them. This helps bring diverse perspectives to the field and promotes gender equality in technology development.

Evidence

Reference to studies showing the importance of mentorship for women in technical fields.

Major Discussion Point

Collaboration for Multilingual AI Development

Empowering youth to be future internet leaders

Explanation

The Internet Society focuses on empowering youth through training programs to become future internet leaders. This helps ensure diverse representation in internet governance and development.

Evidence

Mention of empowerment programs that train future internet leaders.

Major Discussion Point

Collaboration for Multilingual AI Development


Audience

Speech speed: 137 words per minute

Speech length: 1056 words

Speech time: 460 seconds

Current AI tools struggle with many local languages

Explanation

Existing AI tools like ChatGPT have difficulties accurately processing and responding in many local languages. This highlights the need for more diverse language data and improved AI models.

Evidence

Example of ChatGPT struggling with basic Swahili greetings and Punjabi language processing.

Major Discussion Point

AI and Multilingual Inclusion

Lack of quality data in many languages

Explanation

There is a shortage of high-quality data in many languages, especially for less common or ‘exotic’ languages. This lack of data makes it difficult to create effective AI models for these languages.

Evidence

Example from Russia where a company is struggling to find sufficient high-quality resources in Russian and other languages.

Major Discussion Point

Challenges in Developing Multilingual AI

Agreed with

Athanase Bahizire

Claire van Zwieten

Agreed on

Need for diverse language data in AI development

Encourage learning and use of multiple languages

Explanation

Promoting multilingualism by encouraging the learning and use of multiple languages in schools and communities is important for language equity. This helps create a more linguistically diverse online environment.

Major Discussion Point

Promoting Language Equity and Inclusion

Agreed with

Athanase Bahizire

Claire van Zwieten

Agreed on

Importance of documenting local languages and content

Ensure public services support multiple languages

Explanation

Public services should provide materials and support in multiple languages to promote language equity. This ensures that all community members can access important information and services regardless of their primary language.

Major Discussion Point

Promoting Language Equity and Inclusion

Increase diversity in language representation

Explanation

Promoting diversity in language representation in literature, media, and academia is crucial for language equity. This helps ensure that all languages and cultures are represented in various domains of knowledge and entertainment.

Major Discussion Point

Promoting Language Equity and Inclusion

Leverage grants and funding for language preservation projects

Explanation

There are grants and funding available for projects focused on language preservation and digitization of libraries. These resources can be used to support efforts in documenting and preserving local languages.

Evidence

Mention of existing grants for digitization of libraries and language preservation projects.

Major Discussion Point

Promoting Language Equity and Inclusion


Jesse Nathan Kalange

Speech speed: 0 words per minute

Speech length: 0 words

Speech time: 1 second

Multi-stakeholder approach involving government, industry, and academia

Explanation

A collaborative approach involving government, industry, and academia is necessary for developing multilingual AI. This ensures a comprehensive effort in addressing the challenges of language diversity in AI development.

Major Discussion Point

Collaboration for Multilingual AI Development

Agreements

Agreement Points

Importance of documenting local languages and content

Athanase Bahizire

Claire van Zwieten

Audience

Documenting local languages and content is crucial

AI can help preserve endangered languages

Encourage learning and use of multiple languages

All speakers emphasized the importance of documenting and preserving local languages and content to support multilingual AI development and cultural preservation.

Need for diverse language data in AI development

Athanase Bahizire

Claire van Zwieten

Audience

AI models need diverse language data to be inclusive

Limited representation of diverse languages in AI development

Lack of quality data in many languages

Speakers agreed that there is a significant need for diverse and high-quality language data to develop inclusive AI models that support multiple languages.

Importance of connectivity for multilingual content creation

Athanase Bahizire

Claire van Zwieten

Connectivity is key for communities to create online content

Internet Society’s role in connecting the unconnected

Both speakers highlighted the crucial role of internet connectivity in enabling communities to create and share content in their local languages.

Similar Viewpoints

Both speakers emphasized the importance of local initiatives and solutions in addressing language diversity challenges in AI and internet development.

Athanase Bahizire

Claire van Zwieten

Need for innovation and local solutions

Support local chapters working in their languages

Both the speaker and audience members stressed the importance of empowering diverse groups, particularly youth, to participate in internet governance and development.

Claire van Zwieten

Audience

Empowering youth to be future internet leaders

Increase diversity in language representation

Unexpected Consensus

Technical challenges in accommodating non-Latin scripts

Athanase Bahizire

Audience

Technical challenges in accommodating non-Latin scripts

Current AI tools struggle with many local languages

There was an unexpected consensus on the specific technical challenges faced in accommodating non-Latin scripts and local languages in AI tools and internet systems, highlighting a shared understanding of the complexities involved in multilingual AI development.

Overall Assessment

Summary

The main areas of agreement centered around the importance of documenting and preserving local languages, the need for diverse language data in AI development, and the crucial role of connectivity in enabling multilingual content creation.

Consensus level

There was a high level of consensus among the speakers on the fundamental challenges and necessary steps for promoting multilingual inclusion in AI and internet development. This strong agreement suggests a shared understanding of the issues and potential solutions, which could facilitate collaborative efforts in addressing language diversity challenges in the digital space.

Differences

Different Viewpoints

Approach to language documentation

Athanase Bahizire

Claire van Zwieten

Documenting local languages and content is crucial

AI can help preserve endangered languages

While both speakers emphasize the importance of language preservation, Athanase focuses on community-driven documentation efforts, while Claire highlights the role of AI in language preservation.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were subtle and centered around the approach to language documentation and preservation, as well as the emphasis on local solutions versus institutional support.

Difference level

The level of disagreement among the speakers was relatively low. Most speakers shared similar goals and perspectives on the importance of multilingual inclusion in AI and internet governance. The differences were mainly in the specific approaches and areas of emphasis, which could potentially lead to complementary rather than conflicting strategies for addressing the challenges of multilingual AI development and internet inclusion.

Partial Agreements


Both speakers agree on the need for multilingual inclusion, but Athanase emphasizes local innovation and solutions, while Claire focuses on the Internet Society’s existing programs and support.

Athanase Bahizire

Claire van Zwieten

Need for innovation and local solutions

Internet Society promotes multilingualism in its programs


Takeaways

Key Takeaways

AI models need diverse language data to be truly inclusive and multilingual

Documenting and creating online content in local languages is crucial for AI development

Connectivity and internet access are fundamental for communities to create and share local language content

AI can help preserve endangered languages if properly developed

There is a need for more diversity and representation in AI development to address language inequities

Innovation and local solutions are key to developing multilingual AI systems

A multi-stakeholder approach involving government, industry, and academia is necessary for advancing multilingual AI

Resolutions and Action Items

Encourage people to document their local languages and cultural heritage online

Support and participate in Internet Society programs to become future internet leaders

Leverage grants and funding opportunities for language preservation projects

Promote the learning and use of multiple languages in schools and communities

Increase representation of diverse languages in literature, media, and academia

Unresolved Issues

How to effectively support minority languages with small speaker populations in AI development

Addressing the technical challenges of accommodating non-Latin scripts in AI systems

Balancing the focus between major languages and less widely spoken languages in AI development

How to ensure consistent quality of language data for AI training across different languages

Suggested Compromises

Utilize open-source AI models as a foundation for developing localized language models

Focus on documenting and digitizing existing cultural and linguistic resources as a starting point

Collaborate with local communities and leverage community networks to improve connectivity and content creation

Thought Provoking Comments

AI, artificial intelligence, is basically the ability of a machine to mimic what the human brain can do, the tasks we can do. And we have seen a wide hype around AI now with the LLMs, the language models. We think that AI is something that is only coming now, but we have had AI systems for a long time, and they are still developing.

speaker

Athanase Bahizire

reason

This comment provides important context about AI, clarifying common misconceptions and grounding the discussion in a longer historical perspective.

impact

It shifted the conversation from viewing AI as a new phenomenon to understanding it as an evolving field, setting the stage for a more nuanced discussion about AI’s role in multilingualism.

For us to have AI models, we need to have data. It’s just like a human being, for you to start speaking, you need to listen. After listening, okay, you understand, you learn, then you can speak, then you can deliver. It’s the same with AI, it has to learn from the data and then deliver.

speaker

Athanase Bahizire

reason

This analogy effectively explains the fundamental concept of how AI models work, making it accessible to a general audience.

impact

It led to a deeper discussion about the importance of data in AI development, particularly in the context of multilingualism and local language preservation.

AI can help with the digital divide because it brings some people up, but it also deepens it. Because if you are in an area where your language is not represented on the internet, largely because you don’t have very good access in your area, you are not going to be able to use the benefits of AI either.

speaker

Claire van Zwieten

reason

This comment highlights a critical paradox in AI development and its potential impact on linguistic diversity.

impact

It sparked a more critical examination of the potential downsides of AI in language preservation and representation, leading to discussions about the need for inclusive AI development.

There has been some amazing work done by the Navajo tribe, which is a tribe indigenous to the United States. And their language is disappearing. As new generations are growing up, they’re not learning Navajo the same way their parents or grandparents did. So there has been a huge push among young people in the region to make sure that their language is protected through AI.

speaker

Claire van Zwieten

reason

This example provides a concrete case study of how AI can be used for language preservation, making the discussion more tangible and practical.

impact

It inspired further discussion about practical applications of AI in preserving minority languages and cultural heritage.

We need to build our own systems. And we have many of these resources. Education in the past was all about going to big universities and landing all these big opportunities, but now, as Claire was saying, the internet is an enabling tool, and through the internet we can learn some of these things. Many of the software engineers you see today will tell you they didn’t learn 90% of their skills in university.

speaker

Athanase Bahizire

reason

This comment emphasizes the importance of local innovation and self-reliance in developing AI systems for multilingualism, while also highlighting the democratizing power of the internet for education and skill development.

impact

It shifted the conversation towards discussing practical steps that individuals and communities can take to contribute to AI development for their languages, rather than relying solely on large tech companies or universities.

Overall Assessment

These key comments shaped the discussion by providing a comprehensive overview of AI’s role in multilingualism, from its basic principles to its potential impacts and practical applications. The conversation evolved from a general introduction to AI to a nuanced exploration of its challenges and opportunities in preserving linguistic diversity. The speakers effectively balanced theoretical concepts with practical examples, encouraging participants to consider both the global implications of AI in language and the local actions they can take to contribute to inclusive AI development. This approach fostered a rich, multifaceted discussion that addressed both the technical and social aspects of AI in multilingual contexts.

Follow-up Questions

How can we encourage local communities to produce better quality text content in their languages?

speaker

Vlad Ivanets

explanation

This is important for building effective AI language models for less common languages.

How can we empower local communities to use AI systems in their own languages without fear?

speaker

Vlad Ivanets

explanation

This is crucial for increasing adoption and usefulness of AI in diverse linguistic contexts.

How can AI be tailored to support underserved or minority-language speakers?

speaker

Abineth Sentayo

explanation

This is essential for ensuring digital equity and preventing linguistic minorities from being left behind in technological advances.

How can we concretely document our languages?

speaker

Grace from Cameroon

explanation

This is important for preserving linguistic heritage and providing data for AI language models.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa

Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa

Session at a Glance

Summary

This discussion focused on data governance and sharing in Africa, highlighting the challenges and opportunities presented by the continent’s digital transformation. The panel, comprising representatives from various African organizations, emphasized the importance of harmonizing data policies across the continent to facilitate economic growth and innovation.

Key points included the need for a balanced approach to data localization, recognizing both the importance of data sovereignty and the necessity of cross-border data flows for trade and development. Panelists stressed the importance of categorizing data to determine which types should be kept locally and which can be shared regionally or internationally.

The African Union’s Data Policy Framework was discussed as a guiding document for member states, though challenges in implementation were noted, including varying levels of digital readiness among countries. The need for stronger institutional frameworks at the continental level was emphasized to ensure consistent implementation of data governance policies.

Infrastructure challenges, including the lack of data centers and reliable energy sources, were identified as significant obstacles to data localization efforts. The panel also highlighted the importance of building local capacity in data management and analysis to fully leverage the potential of data for development.

The discussion touched on the need for a multi-stakeholder approach, involving both public and private sectors in developing data governance strategies. The importance of aligning data protection laws with trade agreements, particularly the African Continental Free Trade Area (AfCFTA), was stressed to avoid conflicts between data localization efforts and trade facilitation.

In conclusion, the panel agreed on the need for intentional, collaborative efforts to develop comprehensive national and regional data strategies that balance protection, innovation, and economic growth while addressing the unique challenges faced by African countries in the global digital economy.

Keypoints

Major discussion points:

– The importance of harmonizing data policies and governance across Africa

– Challenges of data localization and cross-border data flows

– The need for infrastructure development to support data governance

– Balancing national interests with continental vision for data

– Building capacity and empowering countries to implement data strategies

The overall purpose of the discussion was to explore how African countries can develop and implement effective data governance policies and frameworks to support digital transformation and economic growth across the continent.

The tone of the discussion was generally constructive and collaborative. Speakers acknowledged the challenges facing Africa in terms of data governance but remained optimistic about the potential benefits of improved data policies. There was a sense of urgency in addressing these issues, balanced with recognition that progress will take time and require cooperation between countries and stakeholders. The tone became more focused and solution-oriented towards the end as speakers offered specific recommendations and key takeaways.

Speakers

– Moderator: Session moderator

– Vincent Olatunji: Commissioner, Nigeria Data Protection Commission

– Paul Baker: International Economics Consulting Limited

– Souhila Amazouz: African Union Commission

– Thelma Quaye: Director of Digital infrastructure, skills and empowerment, Smart Africa

– Lillian Nalwoga: Affiliation not specified

Additional speakers:

– Levi Siansege: Internet Society, Zambia chapter and youth IGF

– Abdulmanam Ghalila: Telecom Regulator of Egypt

– Sorina Safa: UNEKA

– Dr. Martin Koyabe: GFC Africa

– Dereje Johannes: UNEKA

– Baratang Mia: Galhype Women Who Code

Full session report

Data Governance and Sharing in Africa: Challenges and Opportunities

This discussion focused on the critical issue of data governance and sharing in Africa, exploring the challenges and opportunities presented by the continent’s ongoing digital transformation. The panel, comprising representatives from various African organisations, emphasised the importance of harmonising data policies across the continent to facilitate economic growth and innovation.

Key Themes and Discussion Points

1. Data Governance Frameworks and Policies

The African Union’s Data Policy Framework emerged as a central topic, with Souhila Amazouz from the African Union Commission (AUC) highlighting its aim to maximise data access and flows across the continent. The framework is built on principles of transparency, accountability, equity, and cooperation. Thelma Quaye from Smart Africa stressed the need for harmonisation of data policies at the continental level to align with this framework. Lillian Nalwoga emphasised the importance of developing intentional national data strategies and policies.

The discussion revealed that two-thirds of African countries have data protection legislation in place, with the Malabo Convention serving as a key instrument for data protection laws. However, audience members raised concerns about the challenge of aligning national interests with the continental vision on data governance. Paul Baker advocated for a practical approach to data localisation that doesn’t stifle business, highlighting the complex balance required in policy development.

2. Challenges in Data Governance

Several key challenges in data governance across Africa were identified:

– Lack of comprehensive legal frameworks in some countries

– Limited institutional capacity for implementing and enforcing data policies

– Inadequate data infrastructure and hosting capacity

– Cybersecurity threats

– Digital divide between urban and rural areas

– Data fragmentation at country and regional levels

3. Data Infrastructure and Localisation

The discussion revealed significant infrastructure challenges facing many African countries. Souhila Amazouz mentioned plans for regional data centres to improve infrastructure, demonstrating efforts to address this issue at a continental level.

The concept of data localisation sparked debate among the panellists. Thelma Quaye emphasised the need to balance data localisation with enabling cross-border data flows. Vincent Olatunji argued that full data localisation is not practical and called for data categorisation and classification. Paul Baker cautioned that data localisation policies can raise costs for businesses, highlighting the economic implications of such measures.
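A categorisation scheme of this kind can be thought of as a simple lookup from a dataset's sensitivity to the widest scope at which it may flow. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not an AU, Smart Africa, or national tool, and the tier names, scope ordering, and example datasets are assumptions made only for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Sensitivity(Enum):
    """Illustrative sensitivity tiers of the kind the panellists describe."""
    PUBLIC = auto()
    PRIVATE = auto()
    CONFIDENTIAL = auto()


class Scope(Enum):
    """Hypothetical localisation scopes, ordered from most to least restrictive."""
    NATIONAL = 1
    REGIONAL = 2
    CONTINENTAL = 3
    GLOBAL = 4


# Hypothetical policy matrix: the widest scope each sensitivity tier may flow to.
MAX_SCOPE = {
    Sensitivity.CONFIDENTIAL: Scope.NATIONAL,
    Sensitivity.PRIVATE: Scope.CONTINENTAL,
    Sensitivity.PUBLIC: Scope.GLOBAL,
}


@dataclass
class Dataset:
    name: str
    sensitivity: Sensitivity


def transfer_allowed(dataset: Dataset, destination: Scope) -> bool:
    """Return True if the dataset may be shared at the requested scope."""
    return destination.value <= MAX_SCOPE[dataset.sensitivity].value


if __name__ == "__main__":
    health_records = Dataset("national health records", Sensitivity.CONFIDENTIAL)
    trade_stats = Dataset("aggregated trade statistics", Sensitivity.PUBLIC)

    print(transfer_allowed(health_records, Scope.REGIONAL))    # False: stays in-country
    print(transfer_allowed(trade_stats, Scope.CONTINENTAL))    # True: may flow across the continent
```

In practice such rules would be set by national law and continental agreements rather than hard-coded, but the lookup structure mirrors the "which box of data can flow where" matrix described later in the transcript.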

4. Cross-Border Data Flows and Trade

The importance of cross-border data flows, particularly in the context of implementing the African Continental Free Trade Area (AfCFTA), was a recurring theme. The moderator emphasised the crucial role of cross-border data flows in realising the AfCFTA’s objectives. Thelma Quaye stressed the need to align data protection laws with AfCFTA objectives to avoid potential conflicts.

Paul Baker highlighted the importance of cross-border data flows for Micro, Small and Medium Enterprises (MSMEs) and trade. However, an audience member noted the current low demand for cross-border data flows due to manual systems still in use in many countries, pointing to the need for further digitalisation efforts.

5. Capacity Building and Implementation

The discussion underscored the critical need for capacity building in data governance across Africa. Souhila Amazouz emphasised the importance of building capacity among member states on data governance issues. The AUC is implementing the AU Data Policy Framework through:

– Organizing capacity building workshops

– Providing technical assistance to member states

– Working with the network of African data protection authorities

Vincent Olatunji stressed the importance of empowering data protection authorities to effectively implement and enforce data governance policies. The moderator stressed the importance of a multi-stakeholder and multi-sectoral approach to address these complex challenges effectively.

6. Trust and Collaboration

Speakers highlighted the importance of building trust between governments and businesses regarding data sharing. This includes ensuring transparency in data collection and use, as well as involving all stakeholders in policy development processes.

Key Takeaways and Unresolved Issues

The discussion yielded several key takeaways, including the importance of the AU Data Policy Framework, the need for harmonised policies that consider national interests, the crucial role of cross-border data flows for trade, and the necessity of capacity building in data governance.

Unresolved issues included finding the right balance between data localisation requirements and cross-border data flows, addressing the digital divide and infrastructure gaps across African countries, ensuring data quality and avoiding biases, aligning data protection laws with AfCFTA objectives, and financing data infrastructure and governance initiatives from both public and private sectors.

Future Considerations

Follow-up questions raised by participants highlighted additional areas for future consideration, including balancing data localisation with the promotion of African-owned platforms, managing data quality and preventing fake data, implementing the AU Data Policy Framework beyond policy domestication, addressing data fragmentation at the country level, avoiding data biases that perpetuate inequalities, and ensuring transparency in AI algorithms used for data processing.

In conclusion, the discussion revealed the complex challenges facing Africa in developing effective data governance strategies. While there was broad agreement on the need for harmonised policies and improved infrastructure, the path to implementation remains fraught with obstacles. The conversation underscored the importance of continued dialogue, capacity building, and collaborative efforts to navigate the intricate landscape of data governance in Africa’s digital transformation journey.

Session Transcript

Moderator: Good morning, and sorry that we are starting this session late. Let me check that our online speakers are with us. Souhila, are you online? Okay, you are here. And also Dr. Vincent. Yes, good morning, and again sorry for the delay. Before we start, let me say why this session matters: discussing data sharing and the governance of data in the continent is very important. The African digital market is estimated at 180 billion dollars by 2025, with a lot of that value coming from data. Since the development of AI, data can provide many opportunities for the continent and improve economic growth, and the reliability of data is also very important. But the African continent faces several challenges in taking advantage of this data generation. The first key challenge is policy: countries each have their own policies, and we need to harmonize them at the continental level. That is why the AUC came up with the AU Data Policy Framework to guide the African data market; it is a very important strategy at the continental level. Second, there is infrastructure. Digital infrastructure is a key challenge for data development in the continent, and we have a very big digital gap: at the global level, among the 20 countries with the weakest digital skills, 12 are from Africa. Another very important challenge for the African continent is regional collaboration, which we currently lack. We have several institutions working in the continent and across the member states, and collaboration between them is very, very important. At the global level, a data governance working group was put in place after the adoption of the Global Digital Compact, and among the objectives of the Global Digital Compact are creating a global digital partnership, global collaboration and, very importantly, the governance of data. Our discussion today will provide some key guidance for African countries on how to benefit from this data generation. As we know, Africa will represent around 40% of the world's youth population in the coming generation, and we need to make sure we have good governance and create welfare for this new generation, which good governance of data at the continental and global levels can help deliver. Today I have distinguished panelists to talk about this: Ms. Souhila from the AUC, my sister; Dr. Vincent, online; my other sister Lillian; and my brother Paul. I think we are going to have a good discussion, and I am not going to ask difficult questions. Let me start with Souhila. We talk a lot about harmonization of data policy at the continental level. Why did the AUC, in partnership with key institutions, develop the AU Data Policy Framework for the continent? Could you tell us the objectives of this framework, and where we are now with its implementation at the continental level? Over to you.
You have three minutes.

Souhila Amazouz: Thank you. Good morning. Do you hear me? Yes? Good morning, everybody, and thank you, moderator, for this important question. Indeed, the adoption of the AU Data Policy Framework shows the political will of African countries to effectively use data to support digital transformation and the development agenda of the continent. The framework aims to maximize data access and data flows across the continent; we consider data a strategic asset and a valuable resource. Its development was comprehensive, forward-looking and participatory, involving all stakeholders, given the importance of data and its multidimensional nature. The framework puts forward a vision for the continent on how to use, manage and regulate data so that we can benefit from this resource, while at the same time putting in place the necessary safeguards to protect people and economies from its misuse. The principles guiding the framework are transparency, accountability, equity and cooperation, as cooperation is essential to develop a shared data ecosystem across the continent and to facilitate data flows within and between countries. The framework identifies the strategic priorities of the continent: developing the necessary capacities, strengthening and harmonizing data governance across the continent, and reaching a high level of convergence of data laws and regulations, in line with the Malabo Convention, to ensure the same level of data protection across the continent. It also aims to build the capacity of member states to develop their national data systems and national data policies, and to treat data as a strategic asset, going beyond personal data to data as a resource needed for the development of the digital economy, artificial intelligence and other emerging technologies. The framework provides guidance to member states on how to organize the data governance ecosystem at the national level and how to create value from data, enabling the creation of a data market that will be an integral part of the digital single market. On harmonization and where we are since the adoption of the framework in 2022: we started with capacity building to create a common understanding, because one of the challenges behind the fragmentation of data governance across the world is that there is no common understanding and there are different approaches to data governance. Following the adoption of the framework, we organized several workshops at national and regional levels to reach the same level of understanding in line with the framework, and to lay the foundation for integrated and interoperable data systems across the continent.

The challenge when it comes to harmonizing data policies and data governance across the continent is that, so far, the focus has mainly been on personal data. Only a few countries have developed national data policies and are working on broader data governance, but the work is ongoing: we are providing a technical assistance programme with the support of our partner GIZ, and, as part of the Data Governance for Africa initiative, we are assisting member states to develop their national data systems. There is also the issue of data storage capacity, which may be an obstacle to data flows across the continent. We have also identified the need to empower and build the capacity of data protection authorities and commissions, because their mandate now goes beyond managing personal data, and to enhance cooperation between the different data protection authorities across the continent; we are working with the network of African data protection authorities. As you mentioned, we are also working with other organizations and stakeholders, and, as part of the programme, we are building the capacity of the regional economic communities so that they can develop the necessary frameworks and mechanisms at the regional level. At the continental level, the AU Data Policy Framework provides guidance on how to facilitate and enable data flows across the continent. As part of its recommendations, we are working on a mechanism to facilitate those flows, on a data categorization and sharing framework, and on open data, because, as you mentioned, we need to make data available for artificial intelligence; the two are linked. At the same time, we need to build the capacity of our member states, both at the policy level and at the technical level, and to enable Africa to participate in the ongoing discussions shaping data governance worldwide, in line with the Global Digital Compact. That is what I can say, but I will be happy to answer any questions.

Moderator: Thank you very much. You highlighted several very important issues. We are guided by two key frameworks in the continent: the AU Digital Transformation Strategy 2020-2030 and the African Continental Free Trade Area. How does this data governance work fit into these two frameworks?

Souhila Amazouz: Thank you for this follow-up question. The Digital Transformation Strategy is the overall framework that is guiding the digital agenda of the continent and the data policy framework is aligned as part of the implementation of the Digital Transformation Strategy. We developed several strategic frameworks, including the data policy framework that addresses the aspects related to data, but the overall objective is to accelerate an inclusive and sustainable digital transformation across the continent.

Moderator: Okay, thank you very much. Let me come now to my sister Thelma. In the AU strategies there are many issues regarding cross-border data flows. As Smart Africa, you now cover, I think, 41 member states, which is good progress. What role can regional organizations such as Smart Africa play in addressing this challenge?

Thelma Quaye: Thank you very much. Can you hear me? Let me pick up on a few things Souhila has mentioned, the first being harmonization. One key thing that regional organizations can and should do is move in the same direction when it comes to regulations and policies, and that is why it is very important that we are all aligned with the African Union data framework. You will find that many regional economic communities, such as ECOWAS, are trying to create regional harmonized data frameworks. But it is important that whatever work is being done in ECOWAS, in SADC or in the EAC is aligned with the pan-African framework. That is the only way we can move in one direction on policies that are currently fragmented. The other thing is infrastructure. About two years ago, Smart Africa did a listening tour: we went around our countries to see the state of data infrastructure, and what we found was a lack of trust between governments and businesses. Governments think that most of these businesses are there to harvest data for their own benefit. So what a regional organization like Smart Africa can do is foster that trust, create that trust currency, by ensuring that countries, beyond regulations and policies, have the same understanding of what a security threshold is. What do I mean by that? If Rwanda and Uganda need to share data, they need to know that they have the same policies, or the same understanding of security and the same tolerance for data privacy, so that they can build that trust. But we also need to look at it from an economic point of view. When we create these regulations and policies, we should ask: if an MTN came to Africa, would it be economically attractive for MTN to deal with 50-plus policies? No. That is why we are also looking at setting up regional data centers, putting cloud technology on top of data centers across different countries so that we aggregate that space. Because of the trust issue I started with, every government is trying to create its own data center, but those data centers have very low utilization, which is a lot of cost we do not need to incur. So, from the infrastructure point of view, encouraging trust means building this cloud technology on top of the various existing data centers. We chose four countries where governments would share a similar infrastructure, leveraging cloud technology and thereby attracting multinationals to come in. This makes it easier for the multinationals, and the governments also have the confidence of knowing where their data is and that it is not being extracted.

One more thing I want to add: as regional organizations, beyond supporting countries and creating frameworks, regulations, policies and data centers, we also need to be aligned among ourselves. We need to understand that there is only one Africa, and that the only reason we exist as regional organizations is the African citizen. So there needs to be what I call co-opetition: we need to cooperate and work together. We need to harmonize whatever we are doing so that there is no duplication of effort. You realize that a lot of organizations are doing the same thing; we could be more efficient if we came together, even if we are doing the same things, by splitting the regions or sharing responsibilities along the value chain. That is something we as regional organizations also need to do. On the technical implementation, it is not lost on us; it is the how. If we are able to coordinate properly, we will be able to move forward. Thank you.

Moderator: Thank you very much. All regional institutions should work together to avoid duplication, in the interest of the continent. When it comes to implementing the AU Digital Transformation Strategy, we now have a picture at the regional and continental levels, with the AUC and Smart Africa both working on data, and we need to harmonize all these efforts into one strategy for the continent. Now let us see how this can be implemented at the national level and what challenges countries face; we can get some examples from Dr. Vincent. How do national data policies impact economic growth, trade, innovation and digital transformation? As you know, data is a key tool to promote innovation in the continent. Dr. Vincent, you have the floor.

Vincent Olatunji: Thank you very much. Good morning, and thank you for inviting me to be part of this panel. What we are talking about is really important, because nearly everything we do in the digital world now depends on data; as we say, data is the lifeblood of everything we do. So it is really important for us as a continent to put in place appropriate strategies to harmonize data and have a continental approach to whatever we want to do with it. However, we must refrain from using the word data loosely. We need to properly identify the type of data we are talking about. Is it for planning? For policies? Or is it about privacy and protection, the dignity of human beings, the rights of citizens, their freedoms or their interests? These are things we need to put in proper context. When it comes to personal data, we have really done a lot in Africa: out of 54 countries, about 37 currently have their laws, and about 17 are still considering what to do in terms of developing appropriate laws to protect the privacy of individuals. Looking at the way we live and work globally, there is virtually nothing we can achieve in terms of economic growth, development or trade without proper harmonization of data, without appropriate policies to address such data, and without appropriate implementation and institutional frameworks. It is one thing to have the data; it is a completely different thing to administer it properly and make it work for you in a way that adds value to your economy and leads to economic growth. For us to really tap the benefits of data for economic growth, we need appropriate policies, and I commend the African Union for what it has been doing with the AU data framework and, in the area of personal data, the Malabo Convention, which was a turning point for instituting data protection laws across the continent. As I said, many countries now have personal data laws, which speaks to confidence and trust in their economies: foreign investors coming to Africa know that when they share their data, there are appropriate policies, laws and institutions guaranteeing that the information is safe and secure. To that extent, this has led to growth in the economies of those countries with existing laws. When you talk of trade, we are talking of the African Continental Free Trade Agreement, the whole idea of which is a common, unified digital trade regime in Africa. This cannot happen without data, because it leads to e-commerce and to cross-border transfers of data from one country to another to enhance trade and commerce. If we do not have that, we cannot make progress.

With what has been done under the AfCFTA, I am sure that by the time we come together for proper implementation, we will benefit from the inherent potential of data harmonization and the proper implementation of data policies across the continent. In the area of innovation, look at what is happening in the start-up ecosystem in Africa: it is growing at an unprecedented rate, although still slowly compared with other regions. For instance, Africa has about seven unicorns, compared with over 600 in the US and over 100 in India, so we still have a lot of work to do. The only way to do it is to ensure we have appropriate policies that can drive data, and more importantly data for emerging technologies: blockchain, artificial intelligence, the Internet of Things, big data and so on. What will drive all of these is data, so without an appropriate framework we will not be able to achieve anything. However, the story is getting better than it used to be. The whole world is now focusing on the start-up ecosystem in Africa, supporting it with funding and strategies to enhance innovation. That is why the digital economy in Africa is growing very fast and we are benefiting a lot from it. Now, on digital transformation: no government can be fully transformed or fully digitalized without appropriate data that speaks to who we are, where we are and where we want to get to, and then to the technological measures we can put in place to achieve the vision we have set for ourselves. This cannot happen without appropriate policies, frameworks and guidelines. Take education, for instance: you can now learn from the comfort of your room. Look at what happened with COVID. A lot of schools went under during COVID, but I keep telling people that, despite all the challenges it brought, there are some areas where it actually pushed us forward: we now have many schools deploying digital technologies for education, and people working from home. The era when you believed strictly in brick-and-mortar offices is gone; now you can stay online, do your trading, and a lot of new things are coming up. I am sure that with all of us working together and putting in place an appropriate institutional framework to drive what is happening in the AU, we will achieve and derive real benefits from data. Thank you.

Moderator: Thank you very much for your intervention. You highlighted something very important: without data, we cannot achieve digital transformation. Let me get the view from Lillian in the context of East Africa and at the national level in Uganda. Lillian, the floor is yours.

Lillian Nalwoga: Thank you, moderator. Good morning, everyone. The impact, and what needs to be done, has been partially mentioned, but I want to throw more light on the digital policies we are seeing emerge and the issue of data localization. In several African countries, not just Uganda or East Africa but also Rwanda across the border, we are seeing countries put an element of data localization into their data protection laws, and this can have a very negative impact. Where we are looking at utilizing data, we are seeing the economic costs: it is likely to increase the cost of doing business. As my colleague from Smart Africa mentioned, there is the issue of setting up data centers. If Rwanda, which I think has a data localization clause in its data protection law, pushes for its own country data center, and then Uganda also wants its data hosted locally, the cost of doing business goes up. And if we are looking at advancing digital transformation, this will hinder cross-border data flows between countries. That is one negative impact. Another possible impact of pushing for data localization is the undermining of privacy. The previous speaker mentioned that around 37 countries have data protection laws, but some of these are still at an infant stage of implementation. So when you push for policies that advance data localization, you are going to cut out some of these countries, and this will also affect the continental push for the African Union data policy: you will find a few countries moving forward and others standing still. My organization (I work with CPSR) has been documenting these impacts: we did an analysis of the AU data policy, and another analysis on which way forward for data localization, and we are finding that where localization is happening there is a real negative impact in terms of privacy violations. A few countries, much as they have data protection laws, are still struggling to set up data protection offices, which are very much at an infant stage; the most active ones we are seeing are Ghana, South Africa, Uganda and, just recently, Kenya. There is still a lot to be done when we talk about promoting and utilizing data policies and actual implementation, so perhaps I will come to the recommendations later.

Moderator: Thank you very much for that. Let me move on, because we are running out of time. Africa is a specific continent: 90% of our economy is informal, and we have heard some challenges around data localization. Paul, how do data localization policies affect small businesses in Africa, in view of the implementation of the African Continental Free Trade Area? Thank you.

Paul Baker: Thank you, Moctar, for the question. Data localization policies do have benefits in terms of promoting more confidence in the security of personal data and meeting privacy requirements. But we also have to consider the implementation of these policies, and whether data is actually more secure in your own jurisdiction than in others; the feasibility of enforcing strong data policies in different countries is questionable anyway. For MSMEs, localization obviously means raising the cost of accessing services that might be cheaper elsewhere. That is the whole point of international trade: being able to access a wider variety of choices, and potentially more sophisticated ones, some of which are less subject to cyber threats. It is not clear yet whether many countries have the systems in place to combat these kinds of risks and threats. And for businesses, while there are opportunities for local firms to develop cloud computing services, the commercial value and returns on cloud storage are quite low in general. That is not where the main benefits come from; they come from the analytical tools that can be layered on top of these cloud computing centres. Do we have a strategy to develop those kinds of services? I do not believe that foresight exists at the moment; we are looking very much at infrastructural development and the actual cloud storage facilities rather than at the analytical tools. And on analytical tools, if we look at the technology and sophistication of some tools available globally, it is going to be very hard to match them, so access to them is also critical for MSMEs. The AfCFTA digital trade protocol does try to promote cross-border transfers of data; it does not encourage data localization, and it tries to promote the free flow of trade. There is also a moratorium on any type of duties on electronic transactions among the members of the AfCFTA. So we see that trade agreements can advance a freer flow of trade. Generally, trade agreements are not prescriptive about national regulations; they set the principles and boundaries of what national legislation should contain. For example, the right of data subjects to be forgotten, or consent for being registered on different systems, are normally incorporated as provisions in a trade agreement, but the agreement does not tell you how you must implement them; as long as you have a system in place that inspires confidence, that is normally sufficient to meet the agreement's standards. The same will be true of the AfCFTA: it sets the general framework requirements, just as the African Union data harmonization strategy does, but it does not prescribe what each country must do. One last thing: trade agreements also promote equivalence and recognition of different standards, and that is quite critical for businesses to be able to access other markets.

Moderator: Thank you very much. You raised a very important point: we need to align with the digital trade protocol, the AU digital framework and the AfCFTA when we want to implement sound data policy at the national level. Let me now open the floor to the audience. This is a very important discussion, and we have learned a lot from our distinguished panelists. Data is everywhere, every day; we are using data day and night. Now, let me… Okay, please introduce yourself: where are you coming from, your name and your institution. Let me start with this gentleman. Do you have a microphone there? Microphone, please. Okay.

Audience: Good morning. I'm Levi Siansege with the Internet Society, Zambia chapter, and also with the Youth IGF. I love the discussions about data. My observation is that most of the platforms to which we send our data as Africans are not owned or hosted in Africa, which raises the question: when we talk about data policies from Africa, most of the data we generate and most of the platforms we use are actually not hosted in Africa. How do we balance the access aspect so that we create more room for data localization while also promoting increased access, so that the infrastructure needed for data to be localized is actually hosted in Africa? And the second part of my question: how do we get to a point where most of the platforms we use are developed and owned in Africa, so that it makes more sense for the data we are pushing to localize to actually be owned and hosted within Africa? Thank you.

Thank you. This is Abdulmanam Ghalila, working for the Telecom Regulator of Egypt. I liked what Souhila said about the policies and the regulation of data. My first question is: how can we manage data so that it is good data rather than fake data? I think that is work for the local community and the local country, not for regional organizations or regional cooperation between countries. The second question is: how do we change the mindset so that data is kept secure locally on the continent? And the third question: do we have an example of regional cooperation around data? Thank you.

Hello, good morning. My name is Sorina Safa from UNEKA. My question is regarding the AU Data Policy Framework compared with the European data policy framework: what did we learn? The European one is binding, while the African Union one is voluntary. And as Lillian highlighted, having a policy or a guideline is good, but to make it a reality there are many things that need to follow. The African Union policy focuses mostly on localization and data sovereignty. As Levi highlighted, there is the digital divide; and if we cannot even have a stable electricity grid in Africa, how do we make sure this aspiration becomes a reality? Harmonization is one thing, but cross-sectoral collaboration is also necessary. So my question to Souhila from the AUC is: what is the implementation plan, other than domesticating the AU Data Policy Framework as a policy framework on its own? Thank you.

First of all, thank you so much for this session. My name is Dr. Martin Koyabe, GFC Africa. I have two parts to my submission: one is a prescription, and the other is a question for the panel. What we see here, in my opinion, is mixed messaging at some levels. On one hand, if I am sitting here as a data commissioner for a country, my responsibility is to protect the data of my country as a sovereign entity. But then I am told, no, you have to open up your data so that you can trade. And then someone else says, we do not even have the data in my country; it is hosted somewhere else. So what we probably need is layered harmonization. If we talk of infrastructure, do we have it in terms of data centers? If we talk of implementation, are the top-level domain names being used effectively? If we talk of awareness, are those who are supposed to be aware actually aware of what needs to be done? For me, the messaging on harmonization should be very clear. The second issue is infrastructure: if we want to share infrastructure, are we building the infrastructure needed to deal with what we are discussing in terms of data harmonization or data localization at the national level? Thank you.

Good morning. Are you hearing me? Yes. Okay, thank you, moderator. I am Dereje Johannes from UNEKA. My question is on data governance challenges in Africa. It is good to talk about data governance and its challenges: issues such as the lack of comprehensive legal frameworks, limited institutional capacity, inadequate data infrastructure, cybersecurity threats and the digital divide are mentioned as challenges of data governance in Africa. But for me, the most serious challenge we should focus on is data fragmentation. My question is: do we really have harmonized data at the country level, let alone at the continental level? We know that data is often siloed across various institutions, making it difficult to integrate and utilize for decision-making. We need to bring even the institutions within a single country to the same format so that we can use the data appropriately. So, do we really have that sort of data harmonization within countries? And my second question is for Souhila: does the AU data governance framework consider this issue? Thank you.

Yes. I just wanted to chip in and say that we recently worked on a project on cross-border data flows in Africa, looking mainly at the inconsistency between the continental vision and what is happening at the national level. We looked at Nigeria, Senegal and Mozambique as case studies, and one of our findings, as some of the panelists have already mentioned, is that there is always a clash between national interests and the broader continental vision. We need more discussion on how best to align what is happening in our national governments with what is happening at the AU level. Something I also want to add, which was not mentioned, is the issue of data deficiencies and especially the low demand for cross-border data flows. In this research we found that a lot of African governments are operating manually, using outdated systems and unreliable data, and because of that there is low demand for the use of that data, which also affects how African countries and businesses approach the question of cross-border data sharing. And just to add, a couple of African countries have already started adopting and implementing the AU Data Policy Framework, so there is room for peer-to-peer learning: the countries leading on national data strategies can share their experiences with others. I will stop there. Thank you.

My name is Baratang Mia from Galhype Women Who Code. My question is based on what Melody said: how do we avoid selective data implications? At the moment we are perpetuating inequalities created by data that is very biased. The people who have access to the internet are the same people who hold the resources, and everything they have excludes the marginalized. So how do we make sure that the data we have online is not perpetuating these inequalities? Thank you.

This is actually a comment rather than a question, adding to what my colleague said. He combined data and digital transformation projects. For there to be communication between data and a digital transformation project, there has to be some kind of AI that processes the data and produces decisions or outputs that can be used by the digital transformation project's decision-makers. But how can I trust the black box of AI software that takes some input and produces some output when I do not know what algorithms were used to derive that output from that input? This is one of the challenges. If we can overcome it, I think we will do better with data for Africa. Thank you.

Moderator: Thank you. If there are no more questions in the room, I will go back to the panelists. There are a lot of questions for the AUC. Let me start with Souhila.

Souhila Amazouz: Thank you. Do you hear me? Yes. I think the majority of the questions relate to data infrastructure, the data localization capacity of Africa as a continent, and hosting capacity at the national level. This is one of the common challenges for African countries. But let me say that we are aware of the weaknesses and deficiencies we need to address, and there is work in progress. We cannot wait until we have everything in place before we start talking about how to manage, regulate and effectively use the data being generated across the continent. For instance, on regional data centers, three regional projects have already been selected and adopted by the regional economic communities; the whole region agreed on them as regional data centers, and the African Union Development Agency, AUDA-NEPAD, is now moving towards implementation. From the recent discussions held as part of the development of the continental strategy on artificial intelligence, there is a recommendation to accelerate the implementation of these three projects and to look at the possibility of additional regional projects. At the national level, as part of the Data Governance for Africa initiative, which is an AU-EU programme, there is a component supporting member states to develop project proposals and business models for national data centers, and to accompany them in preparing national projects. I think having national data centers does not prevent data from flowing across countries. As Thelma mentioned, it is about putting in place the policies and regulations that will build trust and confidence among countries and among different stakeholders so that data can flow across borders. And as Paul mentioned, not all data needs to flow across the continent: we need to do a classification, identify the level of security and privacy of each type of data, and have countries agree on the minimum requirements to facilitate data flows. For this, we are establishing a committee with representatives from AU member states and regional organizations; it will be an open discussion where they can agree on the minimum requirements and the best way to facilitate data exchange and cooperation across the continent. On the comparison between the African Union and European Union policy frameworks, the approaches are different and the contexts are different, as was mentioned. Even within Africa, some countries are relatively advanced in implementing the AU Data Policy Framework and in developing the necessary mechanisms at the national level to manage data, while others are at an early stage of data and digital readiness. As regional and continental organizations, we need to work with all countries and find common ground to facilitate collaboration among them.

So the AU Data Policy Framework is a continental policy, a continental approach. It includes recommendations at the national, regional and continental levels and provides the guidance and direction we need. As we move to implementation, we are going beyond what is included in the framework. For instance, on national data governance, involving all national actors, we are supporting countries to hold national dialogues on data governance with all stakeholders, in which they define the key stakeholders and how they organize themselves to develop their national data governance ecosystems. The work is ongoing: we have progressed very well with Zambia, which is about to finalize its national data policy framework, and Smart Africa has supported two additional countries, Senegal and Ghana. We are working with countries to create that conversation at the national level and to facilitate discussion around data as an asset beyond personal data, as a resource to be put at the disposal of key stakeholders and used for the development of the digital economy and digital trade. As part of this support, we developed guidelines on how to include data in digital trade agreements, and we are working to support member states to develop their national data capacities and address the deficiencies in their laws, which have already been identified in our assessments. As I said, the work is ongoing. About two thirds of countries have national data protection legislation in place, some of it outdated and under review, and fewer than 50% of countries have a national data protection authority or commission, some of which are not yet operational; we are providing support to build the capacity of and empower these data authorities. On the last question, about how to avoid biases and make data available for Africa-driven development: it is a collective effort. Once we put in place all the necessary mechanisms and develop the capacities at the national level, the African Union and the regional organizations aim to create platforms that facilitate collaboration among member states and create the conditions for data to flow and support digital trade, in line with the AfCFTA objectives. Our work is to find the common objectives and common interests of African countries. It is a lot of work, but we are optimistic, because many countries have already started and are already advanced in implementing the AU Data Policy Framework. Thank you.

Thelma Quaye: Thank you. I will try to answer the question on how to balance the fact that most of the platforms we use are not owned in Africa with the push for data localization. I wish I had a board here, but the way I see it is that we need to segregate the data. When we talk about data governance, we are not trying to be protectionist or to turn Africa into an island. On one axis there is national-level localization, then regional-level localization, then continental-level localization; across these, data can be confidential, private or public. Once we categorize our data along these dimensions, we will know what we have to keep in-country because it is critical, of a confidential nature, or a jurisdictional matter. Then we go to the next level. For me, the crucial question of where we let go, where we open up our borders to neighbouring countries, is what will help facilitate the AfCFTA. At the moment, our data protection laws are in direct contradiction with the AfCFTA: we are not aligned. The AfCFTA is in its own world and the data protection authorities are in theirs. Once we have that matrix, we will know which box of data we can let go of to facilitate trade. Rwanda has 12 million people; a business sets up there and cannot thrive because a market of 12 million is too small. But if we tweaked the policy, segregated our data and knew exactly what we can let go of, we could open up a market of 1.4 billion to the rest of Africa. The third part is what we can let go of outside Africa, because we do not have the platforms yet; we have not had many unicorns, as my colleague from Nigeria mentioned. Until we get there, we need to find the fine balance between which data we let go of, which data we keep, and which data flows we facilitate to help our trade. From the Smart Africa perspective, we encourage data localization within an African context, not just a national context. Data localization within an African context raises the question of how we leverage cloud technology and the existing data centers in Africa to localize data in Africa, share trust among ourselves, and retain our data within Africa as much as we can, because that will lower costs, increase efficiency, and improve the quality of service that people receive. Thank you.

Vincent Olatunji: Yeah, I just want to add one or two words on the issues around data localization. The basic question we should ask ourselves is: can we actually practice full data localization? The answer is no. You sit in your room and buy goods and services from organizations in the US, in Europe, in Asia, and you exchange your data, you exchange your information. Is the data you are giving out still local data? The truth is that the world is a global village; there is no way you can practice full localization. Even when you say you want to leverage the cloud, that cloud sits in several cities or countries. Where are these countries? As some of the panelists mentioned, what we need to do is data categorization: which categories of data must remain local in our countries no matter the situation, and which categories of data we must share. I think the AU should take the lead in this work so that other African countries can key into it. In addition, somebody mentioned the advantages that EU countries have compared with what we have here. The strength of the EU GDPR, for instance, is that all 27 EU countries are bound by it, and their population is just about 447 million, whereas in Africa our population is 1.4 billion. But in a situation where we have different focuses and different visions for our laws, and we are not aligning what we are doing with the AU Data Policy Framework, there needs to be very strong institutional control at the level of the AU to direct and guide what is happening in Africa as a continent, from which all other countries within the continent can derive their laws. I think that is very, very important for us, because there is strength in unity and there is power in unity: whatever we want to do, if we are not united we will be divided and we will not be able to achieve anything in the area of the usage of our data and the policies we want to put in place. No matter how robust the policies are, if there is no strong continental institutional framework to control or ensure implementation, we will not be able to achieve it. So we need to work on this. Thank you.

Lillian Nalwoga: I was not asked to respond to any particular question, so let me conclude by saying that we need to be very intentional. My colleague here has mentioned recognizing data as a national asset, and I think being intentional right now means looking at how we go about this categorization. So we need to be developing national data strategies. Much as we have been pushing for data protection and all that, beyond data protection we need national data strategies. I stand to be corrected, but I think Ghana just came up with one with support from GIZ, and Uganda has a working draft which was launched last month. This can be adopted by other countries, because then we shall know which data we are keeping locally, how we can protect it, what kind of investments we are looking at and what kind of support we need. Being intentional also means looking for financing. When you categorize the data, you know where you are going to invest heavily, how you are going to utilize this data to bring in more income, and which category of data is going to promote more innovation. So, being intentional means looking at support and funding, including from the private sector, tapping into that pocket, but also from partners; I think GIZ, Smart Africa, the World Bank and UNDP are doing that, but we also need to see how the private sector can support these national data governance or data strategies.

Paul Baker: Okay, thank you. Just quickly, I think we have to be practical: data localization can stifle businesses. A very simple example: for me to come to this conference I had to send my most personal data to Germany, giving all my passport details. So if we are not going to allow the sharing of very personal information in certain circumstances, that becomes very problematic. A localization strategy is not going to help businesses. How do you enter into a contract with somebody else if you cannot share documents, particularly due diligence documents that are very personal, about the shareholders of the company, where they live, their passport details? So I think it is not very practical, and we need to think about what we want to achieve. I think the EU GDPR was quite effective in that sense: there are certain requirements when sharing data with another country to ensure that they are aligned with the principles that you set, and indeed we have African principles which we can encourage other countries to adopt; they are not normally so unaligned with what the GDPR has already done. Many African countries have already adopted a GDPR model in their data protection acts, but they have not gained equivalence yet. So where we need to put the focus is on mutual recognition, on equivalence, and on helping those countries that are really struggling to even have a data protection agency, or to move towards that format, to try and promote it. I think it is really not a very practical solution to try and bar access to data. And just one thing, if I may, on the platform question, which I think is a very important recurring question. There are three levels of channels that businesses use for selling online. The first is Facebook and social media; that is not truly e-commerce, so we can set those platforms aside. The second is national platforms, which do exist and are being used for domestic e-commerce. And then there are platforms for trading with other markets, and we want to use the platforms that give us the greatest opportunities to access markets. So if that happens to be eBay, or it happens to be Amazon, then that should be the platform that you as a business should be able to use. If you now impose total localization requirements, so that you must only use certain platforms, again it will damage businesses. So we need to think very carefully about these policies.

Moderator: Thank you, we are almost at the end of this session. Let me give you the floor, Paul, for your last word, a one-word takeaway from this meeting, one word only. Harmonized approaches? No, one word. Two words? Harmonization. Harmonization.

Lillian Nalwoga: Intention.

Thelma Quaye: One word? Empowerment, empowerment.

Moderator: Dr. Vincent, one word as a takeaway from this session.

Vincent Olatunji: Collaboration. Collaboration.

Souhila Amazouz: A balanced approach. Like, forward-looking, I would say forward-looking. I think I am allowed to have two words since I am the last one. I would say multi-stakeholder and multi-sectoral approach.

Moderator: I think it was a very important discussion. We raised a lot of issues regarding data sharing and cross-border data flows. We acknowledge the AU Data Policy Framework, but in its implementation at the national level we have to take into consideration local needs, as well as the other frameworks at the continental and global levels, such as the digital trade protocol, the AU digital transformation strategy and the AfCFTA. Data is a national asset, but we need to contextualize data localization in the African context; that is very important. For that, we need to build adequate infrastructure. We talked about data centers, but there is something the panelists did not mention: the issue of energy. If we want to build this infrastructure, we need energy for the data centers, and data center owners will generally build where there is access to energy, and energy is a big problem in Africa. Harmonization is also very important. We need to work together at the continental level to harmonize all our policies, because we have several of them, on data protection, on privacy and also on AI, and all of them are focused on data. We need to harmonize these policies at the continental level. Of course, the regional organizations have to work together to better serve the interests of Africa. We have the AUC, we have ECA, we have Smart Africa, we have the regional economic communities, and we all have to work together to provide assistance to African countries on data governance. We also welcome and acknowledge the support of GIZ, which is supporting the implementation of data governance at the continental level very well. As we move into the Fourth Industrial Revolution, data is key and we need good governance of our data, so thank you once again, GIZ. African countries can take advantage of this support by developing their national strategies for data governance and by building the capacity of member states, because we need to know what is happening at the continental level and what African countries need in terms of data before we build these national strategies. Thank you so much to all the speakers. I think it was well done, and you raised key issues we can take away from this meeting. As you said, the multi-stakeholder approach is very important, collaboration is key, and so are harmonization, intention and empowerment. Thank you very much. Thank you.

Souhila Amazouz

Speech speed: 135 words per minute
Speech length: 2109 words
Speech time: 932 seconds

AU Data Policy Framework aims to maximize data access and flows across the continent

Explanation

The AU Data Policy Framework was developed to promote effective use of data for digital transformation and development in Africa. It aims to maximize data access and flows across the continent while protecting people and economies from data misuse.

Evidence

The framework was developed through a participatory approach involving key stakeholders.

Major Discussion Point

Data Governance Frameworks and Policies in Africa

Agreed with

Thelma Quaye

Vincent Olatunji

Agreed on

Need for harmonization of data policies across Africa

Plans for regional data centers to improve infrastructure

Explanation

There are plans for three regional data center projects that have been selected by regional economic communities. These projects aim to improve data infrastructure across the continent.

Evidence

The African Union Development Agency (AUDA-NEPAD) is moving towards implementing these projects.

Major Discussion Point

Data Infrastructure and Localization

Thelma Quaye

Speech speed: 141 words per minute
Speech length: 1233 words
Speech time: 523 seconds

Need for harmonization of data policies at continental level to align with AU framework

Explanation

Regional organizations should work towards harmonizing data policies and regulations across Africa. This harmonization should align with the African Union data framework to ensure consistency across the continent.

Evidence

Examples of regional economic communities like ECOWAS creating harmonized data frameworks.

Major Discussion Point

Data Governance Frameworks and Policies in Africa

Agreed with

Souhila Amazouz

Vincent Olatunji

Agreed on

Need for harmonization of data policies across Africa

Need to balance data localization with enabling cross-border data flows

Explanation

There is a need to categorize data and determine which types should be localized and which can flow across borders. This approach aims to facilitate trade while maintaining necessary data protections.

Evidence

Suggestion of a matrix to categorize data across national, regional, and continental levels, as well as by confidentiality levels.

Major Discussion Point

Data Infrastructure and Localization

Agreed with

Vincent Olatunji

Paul Baker

Agreed on

Importance of data categorization and balanced approach to localization

Differed with

Vincent Olatunji

Paul Baker

Differed on

Data localization approach

Lillian Nalwoga

Speech speed: 142 words per minute
Speech length: 778 words
Speech time: 328 seconds

Importance of developing national data strategies and policies

Explanation

Countries need to develop comprehensive national data strategies. These strategies should address data categorization, protection, investments, and utilization for economic growth and innovation.

Evidence

Examples of Ghana and Uganda developing national data strategies.

Major Discussion Point

Data Governance Frameworks and Policies in Africa

Need for intentional approach to developing national data strategies

Explanation

Countries should be intentional in developing their national data strategies. This includes categorizing data, determining investment priorities, and identifying ways to utilize data for innovation and economic growth.

Major Discussion Point

Capacity Building and Implementation

Vincent Olatunji

Speech speed: 167 words per minute
Speech length: 1521 words
Speech time: 545 seconds

Full data localization not practical; need for data categorization

Explanation

Complete data localization is not feasible in a globalized world. Instead, countries should focus on categorizing data to determine which types must remain local and which can be shared.

Evidence

Example of sharing personal data for international travel.

Major Discussion Point

Data Infrastructure and Localization

Agreed with

Thelma Quaye

Paul Baker

Agreed on

Importance of data categorization and balanced approach to localization

Differed with

Thelma Quaye

Paul Baker

Differed on

Data localization approach

Importance of empowering data protection authorities

Explanation

There is a need to empower and build the capacity of data protection authorities in African countries. This is crucial for effective implementation of data governance policies.

Major Discussion Point

Capacity Building and Implementation

Agreed with

Souhila Amazouz

Thelma Quaye

Agreed on

Need for harmonization of data policies across Africa

Paul Baker

Speech speed: 163 words per minute
Speech length: 996 words
Speech time: 365 seconds

Need for practical approach to data localization that doesn’t stifle business

Explanation

Data localization policies should be practical and not hinder business operations. Overly restrictive policies can make it difficult for businesses to operate internationally.

Evidence

Example of sharing personal data for conference attendance.

Major Discussion Point

Data Governance Frameworks and Policies in Africa

Agreed with

Thelma Quaye

Vincent Olatunji

Agreed on

Importance of data categorization and balanced approach to localization

Differed with

Thelma Quaye

Vincent Olatunji

Differed on

Data localization approach

Data localization policies can raise costs for businesses

Explanation

Strict data localization policies can increase costs for businesses, especially MSMEs. This can limit their ability to access international markets and services.

Major Discussion Point

Data Infrastructure and Localization

Importance of cross-border data flows for MSMEs and trade

Explanation

Cross-border data flows are crucial for MSMEs to access international markets and services. Restricting these flows can limit business opportunities and economic growth.

Evidence

Discussion of different levels of e-commerce platforms and their importance for businesses.

Major Discussion Point

Cross-Border Data Flows and Trade

Audience

Speech speed: 152 words per minute
Speech length: 1472 words
Speech time: 579 seconds

Lack of data infrastructure and hosting capacity in many African countries

Explanation

Many African countries lack the necessary data infrastructure and hosting capacity. This poses challenges for data localization and management within the continent.

Major Discussion Point

Data Infrastructure and Localization

Challenge of aligning national interests with continental vision on data governance

Explanation

There is a tension between national interests and the broader continental vision for data governance. This makes it challenging to implement harmonized data policies across Africa.

Evidence

Research findings on inconsistencies between continental vision and national-level implementation in Nigeria, Senegal, and Mozambique.

Major Discussion Point

Data Governance Frameworks and Policies in Africa

Low demand for cross-border data flows due to manual systems in many countries

Explanation

Many African governments still operate with manual systems and outdated technology. This results in low demand for cross-border data flows and hinders the adoption of modern data governance practices.

Evidence

Research findings on African governments using manual and outdated systems.

Major Discussion Point

Cross-Border Data Flows and Trade

Challenge of limited institutional capacity for data governance

Explanation

Many African countries face challenges in implementing data governance due to limited institutional capacity. This includes a lack of expertise and resources to effectively manage and regulate data.

Major Discussion Point

Capacity Building and Implementation

Moderator

Speech speed: 143 words per minute
Speech length: 1604 words
Speech time: 671 seconds

Importance of multi-stakeholder and multi-sectoral approach

Explanation

A multi-stakeholder and multi-sectoral approach is crucial for effective data governance in Africa. This involves collaboration between various stakeholders and sectors to address the complex challenges of data management and regulation.

Major Discussion Point

Capacity Building and Implementation

Agreements

Agreement Points

Need for harmonization of data policies across Africa

Souhila Amazouz

Thelma Quaye

Vincent Olatunji

AU Data Policy Framework aims to maximize data access and flows across the continent

Need for harmonization of data policies at continental level to align with AU framework

Importance of empowering data protection authorities

Speakers agreed on the importance of harmonizing data policies across Africa, aligning with the AU Data Policy Framework to ensure consistent governance and protection.

Importance of data categorization and balanced approach to localization

Thelma Quaye

Vincent Olatunji

Paul Baker

Need to balance data localization with enabling cross-border data flows

Full data localization not practical; need for data categorization

Need for practical approach to data localization that doesn’t stifle business

Speakers agreed that full data localization is not practical and emphasized the need for a balanced approach that categorizes data and allows necessary cross-border flows while protecting essential data.

Similar Viewpoints

Both speakers emphasized the importance of developing comprehensive national-level strategies and empowering relevant authorities to effectively implement data governance.

Lillian Nalwoga

Vincent Olatunji

Importance of developing national data strategies and policies

Importance of empowering data protection authorities

Unexpected Consensus

Recognition of infrastructure challenges

Souhila Amazouz

Thelma Quaye

Audience

Plans for regional data centers to improve infrastructure

Need to balance data localization with enabling cross-border data flows

Lack of data infrastructure and hosting capacity in many African countries

There was unexpected consensus on the recognition of infrastructure challenges, with both officials and audience members acknowledging the need for improved data centers and hosting capacity across Africa.

Overall Assessment

Summary

The main areas of agreement included the need for harmonized data policies across Africa, a balanced approach to data localization, development of national data strategies, and recognition of infrastructure challenges.

Consensus level

There was a moderate to high level of consensus among speakers on key issues. This consensus suggests a shared understanding of the challenges and potential solutions for data governance in Africa, which could facilitate more coordinated efforts in policy development and implementation across the continent.

Differences

Different Viewpoints

Data localization approach

Thelma Quaye

Vincent Olatunji

Paul Baker

Need to balance data localization with enabling cross-border data flows

Full data localization not practical; need for data categorization

Need for practical approach to data localization that doesn’t stifle business

While all speakers agree that full data localization is not practical, they differ in their approaches. Thelma Quaye emphasizes balancing localization with cross-border flows, Vincent Olatunji focuses on data categorization, and Paul Baker stresses the need for a business-friendly approach.

Unexpected Differences

Focus on data infrastructure vs. policy

Souhila Amazouz

Audience

Plans for regional data centers to improve infrastructure

Lack of data infrastructure and hosting capacity in many African countries

While Souhila Amazouz discusses plans for regional data centers, audience members highlight the current lack of infrastructure. This unexpected difference highlights a potential gap between policy planning and on-the-ground realities in many African countries.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to data localization, the balance between continental and national-level policies, and the prioritization of infrastructure development versus policy implementation.

Difference level

The level of disagreement is moderate. While there is general consensus on the importance of data governance and the need for harmonization, speakers differ in their specific approaches and priorities. These differences reflect the complex challenges of implementing continent-wide data governance in Africa, given the varying levels of infrastructure and policy development across countries. The implications of these disagreements suggest that a flexible, multi-layered approach may be necessary to address the diverse needs and capacities of different African nations while still working towards continental harmonization.

Partial Agreements

All speakers agree on the importance of data governance frameworks, but they differ in their focus. Souhila Amazouz emphasizes the continental AU framework, Thelma Quaye stresses harmonization across countries, and Lillian Nalwoga highlights the need for national-level strategies.

Souhila Amazouz

Thelma Quaye

Lillian Nalwoga

AU Data Policy Framework aims to maximize data access and flows across the continent

Need for harmonization of data policies at continental level to align with AU framework

Importance of developing national data strategies and policies

Takeaways

Key Takeaways

The AU Data Policy Framework aims to maximize data access and flows across Africa while protecting privacy and security

There is a need to harmonize data policies at the continental level while considering national interests

Data infrastructure and hosting capacity remains a challenge in many African countries

Cross-border data flows are crucial for implementing the AfCFTA and enabling trade

Capacity building on data governance is needed at national and regional levels

A multi-stakeholder and multi-sectoral approach is important for effective data governance in Africa

Resolutions and Action Items

Develop national data strategies aligned with the AU Data Policy Framework

Establish regional data centers to improve infrastructure

Provide technical assistance to member states on developing national data systems

Empower and build capacity of data protection authorities

Create mechanisms to facilitate data flows across the continent

Unresolved Issues

How to balance data localization requirements with the need for cross-border data flows

How to address the digital divide and infrastructure gaps across African countries

How to ensure data quality and avoid biases in data collection and use

How to align data protection laws with AfCFTA objectives

How to finance data infrastructure and governance initiatives

Suggested Compromises

Categorize data to determine what can be localized vs shared across borders

Promote data localization within an African context rather than just national borders

Adopt a balanced approach that considers both data protection and economic growth

Develop equivalence and mutual recognition frameworks for data protection across countries

Thought Provoking Comments

We consider data as a strategic asset and valuable resources. And the framework, by its development, as you mentioned, it was comprehensive, forward-looking, and with participation of all stakeholders, it was participatory approach, considering the importance of data and also the multidimensional of data that requires participation and involvement of key stakeholders.

speaker

Souhila Amazouz

reason

This comment frames data as a strategic asset and emphasizes the importance of a multi-stakeholder approach in developing data governance frameworks. It sets the tone for viewing data policy as a complex, multidimensional issue.

impact

This comment shaped the subsequent discussion by establishing data as a critical resource and highlighting the need for collaborative approaches in policy development. It led to further exploration of stakeholder involvement and comprehensive policy frameworks throughout the conversation.

Beyond supporting countries, beyond creating frameworks and regulations and policies and all these data centers, we also need to be aligned amongst ourselves. We need to understand that there’s only one Africa. And we need to understand that the only reason why we exist as regional organizations is the African citizen.

speaker

Thelma Quaye

reason

This comment introduces the crucial idea of alignment between regional organizations and emphasizes the ultimate beneficiary – the African citizen. It challenges the potential fragmentation of efforts across different organizations.

impact

This comment shifted the discussion towards the importance of coordination between regional bodies and keeping the focus on benefiting African citizens. It prompted further discussion on harmonization and avoiding duplication of efforts.

Without data, we can’t achieve digital transformation.

speaker

Moderator

reason

This succinct statement encapsulates a key insight about the fundamental role of data in digital transformation efforts.

impact

This comment reinforced the central importance of data governance in achieving broader digital transformation goals, influencing subsequent discussions on implementation strategies and challenges.

Data localization policies have their benefits in terms of trying to promote more confidence in the security of personal data and privacy requirements. But we also have to consider indeed the implementation of these policies and actually is data more secure in your own jurisdiction than it is in other jurisdictions.

speaker

Paul Baker

reason

This comment introduces nuance to the discussion of data localization, challenging the assumption that local storage is always more secure and highlighting implementation challenges.

impact

This comment sparked a more critical examination of data localization policies, leading to discussions about balancing security concerns with practical implementation challenges and cross-border data needs.

We need to segregate the data because we are not, you know, when we talk about data governance, we are not trying to be protectionist or we are not trying to, you know, create an island of Africa. There should be, so on one side, there should be national level localization. Then there’s regional level localization. Then there’s continental level localization. Then across this, we have confidential, private, and public.

speaker

Thelma Quaye

reason

This comment introduces a nuanced approach to data localization, proposing a multi-tiered system that balances national, regional, and continental needs while also considering data sensitivity.

impact

This comment deepened the discussion on data localization by proposing a more sophisticated framework. It led to further exploration of how to categorize data and balance various needs and priorities in data governance.

Overall Assessment

These key comments shaped the discussion by establishing data as a strategic asset, emphasizing the need for collaborative and coordinated approaches, highlighting the central role of data in digital transformation, introducing nuance to the debate on data localization, and proposing more sophisticated frameworks for data governance. They collectively moved the conversation from broad principles to more specific implementation challenges and strategies, while consistently emphasizing the need to balance various stakeholder interests and priorities.

Follow-up Questions

How to balance data localization with increased access and promotion of platforms developed and owned in Africa?

speaker

Levi Siansege

explanation

This is important to address the issue of data sovereignty while also promoting African-owned digital infrastructure and services.

How to manage data quality and prevent fake data?

speaker

Abdulmanam Ghalila

explanation

Ensuring data quality is crucial for effective decision-making and policy implementation at local and regional levels.

What is the implementation plan for the AU Data Policy Framework beyond policy domestication?

speaker

Sorina Safa

explanation

Understanding the concrete steps for implementation is essential for turning the policy into actionable results across the continent.

How to address data fragmentation at the country level?

speaker

Dereje Johannes

explanation

Data fragmentation within countries hinders effective data integration and utilization for decision-making, which needs to be addressed before continental harmonization.

How to avoid selective data implications and address data biases that perpetuate inequalities?

speaker

Baratang Mia

explanation

Addressing data biases is crucial to ensure that data-driven policies and initiatives do not exacerbate existing inequalities.

How to ensure transparency and trustworthiness of AI algorithms used in data processing for digital transformation projects?

speaker

Unnamed audience member

explanation

Understanding and trusting the AI algorithms used in data processing is important for building confidence in data-driven decision-making processes.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #32 Shaping an equal digital future with WSIS+20 & Beijing+30

Open Forum #32 Shaping an equal digital future with WSIS+20 & Beijing+30

Session at a Glance

Summary

This discussion focused on gender equality in the context of the Global Digital Compact (GDC) and related digital initiatives. Participants explored how to leverage the GDC to address gender gaps in artificial intelligence, digital access, and technology leadership. They emphasized the importance of mainstreaming gender perspectives in digital strategies and policies.

Key points included the need for gender-responsive AI development, addressing technology-facilitated gender-based violence, and promoting women’s representation in tech industries and STEM education. Speakers highlighted the importance of multi-stakeholder cooperation and partnerships in implementing the GDC’s gender equality principles.

The discussion touched on the digital gender divide, noting that globally there are still 189 million more men than women using the internet. Participants stressed the need for increased investments in digital infrastructure, skills development programs targeting women and girls, and efforts to make online spaces safer for women.

Several speakers emphasized the importance of data collection, particularly gender-disaggregated data, to inform policies and measure progress. The role of digital public infrastructure (DPI) was discussed, with a call for gender-responsive design in DPI solutions.

Looking ahead, participants suggested creating a standalone action line for gender in the World Summit on the Information Society (WSIS) process, integrating digital issues into the Beijing+30 review, and implementing gender-responsive budgeting and procurement in technology initiatives. The discussion concluded by emphasizing the ongoing nature of these efforts and the need for continued collaboration across various UN processes and stakeholder groups.

Keypoints

Major discussion points:

– The Global Digital Compact (GDC) and its incorporation of gender equality principles

– Addressing gender gaps in AI development and governance

– Promoting women’s digital skills, leadership, and representation in tech industries

– Implementing gender-responsive digital public infrastructure (DPI)

– WSIS process and plans for mainstreaming gender in digital initiatives

The overall purpose of the discussion was to examine how gender equality and women’s empowerment are being addressed in global digital policies and initiatives, particularly the Global Digital Compact, and to identify key actions needed to create a more inclusive digital ecosystem.

The tone of the discussion was largely analytical and solution-oriented. Speakers highlighted both progress made and remaining challenges in achieving digital gender equality. There was a sense of urgency in addressing gaps, but also optimism about opportunities to advance gender-responsive approaches through multi-stakeholder collaboration. The tone remained consistent throughout, with all participants contributing constructive ideas and recommendations.

Speakers

– Papa Seck: Chief of Research and Data Section at UN Women

– Helene Molinier: Leads the Action Coalition on Technology and Innovation for Gender Equality at UN Women

– Roy Eriksson: Finland’s Global Gateway Ambassador

– Isabel De Sola: From the UN Secretary General’s Tech Envoy office

– Radka Sibille: Digital affairs advisor at the EU delegation in Geneva

– Hajjar El Haddaoui: Chief of digital economy and foresight at the Digital Cooperation Organization

Additional speakers:

– Nandini Chami: Deputy director of research and policy and engagement at IT4Change

– Tala Debs: WSIS and SDGs project coordinator at ITU

– Caitlin Kraft-Buchman: From Women at the Table and A+ Alliance for Inclusive Algorithms

Full session report

Revised Summary: Gender Equality in the Global Digital Compact and Related Initiatives

This event, organized by UN Women and the International Telecommunication Union (ITU), focused on gender equality in the context of the Global Digital Compact (GDC) and related digital initiatives. The discussion explored how to leverage the GDC to address gender gaps in artificial intelligence, digital access, and technology leadership, emphasizing the importance of mainstreaming gender perspectives in digital strategies and policies.

Introduction and Overview:

Helene Molinier, Director of the Innovation and Technology Facility at UN Women, opened the discussion by highlighting the GDC’s acknowledgment of gender disparities and its inclusion of gender equality as one of its 13 principles. She emphasized the risk of “double exclusion” for women in the digital realm and stressed the need for clear targets and accountability mechanisms for GDC commitments.

EU Perspective:

Radka Sibille, Deputy Head of Division for Digital Technologies at the European External Action Service, affirmed that gender equality is strongly enshrined in the GDC. She highlighted EU initiatives aimed at making online spaces safer for women and emphasized the importance of investing in digital infrastructure and skills programmes for women.

Finland’s Contribution:

Roy Eriksson, representing Finland’s Permanent Mission to the UN, discussed Finland’s commitment to addressing technology-facilitated gender-based violence. He noted the GDC’s commitment to developing methodologies to counter digital violence and highlighted the work of Finland’s Generation Equality Youth Group in promoting gender equality in technology.

World Summit on the Information Society (WSIS) Process:

Tala Debs, Digital Inclusion Specialist at ITU, explained how the WSIS process champions ICTs and gender mainstreaming. She provided specific statistics on the digital gender divide, noting that globally there are still 259 million fewer women than men using the internet. Debs also introduced the WSIS Gender Trendsetters and Repository of Women in Technology initiatives.

Digital Public Infrastructure and AI:

Isabel De Sola, Senior Advisor at the Office of the UN Secretary-General’s Envoy on Technology, discussed the need for partnerships to develop gender-affirming AI, noting that current AI models are likely not gender-responsive due to data biases.

Nandini Chami, Deputy Director of IT for Change, argued that data governance in digital public infrastructure (DPI) must be evaluated through a gender justice lens, emphasizing the importance of public consultations with affected communities in DPI design.

Digital Economy Navigator:

Hajjar El Haddaoui, Chief Impact Officer at the Digital Cooperation Organization, introduced the concept of a Digital Economy Navigator to assess countries’ progress on digital equity. She noted that the GDC offers a framework for cooperation on gender equality and highlighted the organization’s work in promoting digital inclusion.

Recommendations and Action Items:

1. Develop effective methodologies to measure, monitor, and counter digital violence against women.

2. Increase investments in digital infrastructure and skills programmes targeting women and marginalized communities.

3. Integrate gender perspectives into policies and programmes addressing disinformation.

4. Implement the Digital Economy Navigator to assess countries’ progress on digital equity.

5. Support the WSIS Gender Trendsetters and Repository of Women in Technology initiatives.

6. Create a standalone action line for gender in the WSIS process.

7. Integrate digital issues into the Beijing+30 review.

8. Collaborate on a WSIS-Beijing+30 Common Action Plan for bridging the gender digital divide.

Caitlin Kraft-Buchman, from Women at the Table, concluded the discussion by suggesting the creation of a standalone action line for gender in the WSIS process and the integration of digital issues into the Beijing+30 review. She also proposed collaborating on a WSIS-Beijing+30 Common Action Plan for bridging the gender digital divide.

Conclusion:

The discussion highlighted the ongoing nature of efforts to achieve digital gender equality and the need for continued collaboration across various UN processes and stakeholder groups. While there was broad agreement on the importance of gender equality in digital technologies, the speakers emphasized the need for concrete actions, clear accountability mechanisms, and multi-stakeholder cooperation to translate the principles of the Global Digital Compact into meaningful change for women and girls worldwide.

Session Transcript

Papa Seck: Greetings, everyone, and welcome to this session. My name is Papa Seck, and I’m the Chief of Research and Data Section at UN Women. In my section, we also have the work on digital technology and innovation, so it’s really a pleasure to host this session this evening. So just quickly, next year, obviously, as we’ve been speaking about, we will celebrate WSIS Plus 20. But it’s also an important year for gender equality, because it is the 30th anniversary of the Beijing Platform for Action. So the issues related to technology and digital will also be front and center of that discussion. At UN Women, we host the Action Coalition on Technology and Innovation, which is part of the Generation Equality Forum, and you will hear about that shortly from a colleague of mine, Helene Molinier, who really leads that exercise. But before giving her the floor, I just wanted to welcome you all to this really exciting panel and this exciting discussion, and I really hope that we will all be able to gain some insight from it. So without further ado, I’ll give the floor to my colleague, Helene, who will do the presentation for us. Helene, over to you.

Helene Molinier: Thank you very much, Papa, and good afternoon to everyone. I hope you can hear me OK. It’s actually great that the technology allows me to be with you today, even though I cannot be here in person. I’ll start just with a few words to reflect back on the GDC journey and to set the scene before the panel. Over the last two years, we’ve ignited a multi-stakeholder conversation at UN Women level on how to leverage the GDC to drive a more equitable digital transformation. We did that first by publishing a position paper on how to take forward the recommendations of CSW67 and integrate them into the GDC negotiation. A lot of these efforts were led by the members of the Action Coalition on Technology and Innovation for Gender Equality, and by 10 governments in particular that launched a call to action a year before the GDC to mainstream and prioritise gender perspectives in the negotiation. They were also led by civil society organisations, which developed the feminist principles for including gender in the GDC; these principles emphasised the need for a rights-based, gender-responsive digital framework. In the current multilateral context, reaching consensus on the GDC has been an achievement. And for us, I think the first lesson is that it has been the confirmation that collective multi-stakeholder efforts can clearly contribute to positioning gender perspectives in negotiations at the intergovernmental level. As a result of this work and these efforts, what we have now is a GDC that does acknowledge gender disparities, and I think this is a welcome recognition, finally, that digital technologies are far from being neutral. Another step in the right direction has been that the GDC recognised gender equality and the empowerment of all women and girls as one of the 13 GDC principles. The GDC reaffirms many recommendations from the CSW67 agreed conclusions, including the mainstreaming of gender perspectives in digital strategies, the need to address barriers to meaningful, safe and equitable digital access for women and girls, and the fact that we need to promote women’s leadership and participation in technology and digital decisions. And obviously, that we need to urgently address gender-based violence, which occurs through or is amplified by the use of technology. There are a few other great opportunities in the GDC, especially the fact that it acknowledges the data divides, the need to target capacity building and entrepreneurship for women, or the fact that we need to support inclusive STEM education. All these are truly valuable commitments. But we all know that having aspirational language won’t be enough, and so we have a collective responsibility to ensure that the GDC implementation is set on clear targets and accountability mechanisms, so that commitments to gender equality are more than symbolic. We cannot rely only on ethical guidelines; we need to put in place enforceable standards grounded in international human rights. There are, however, a few gaps in the GDC. We find them, for example, on emerging topics such as AI and DPI, where references to gender equality have not been included in the final draft. As a result, we feel that this perpetuates the gaps that have led to the current state of digital inequity. We cannot have digital technologies or digital infrastructure deployed without assessing their broader associated risks and opportunities.
And so having these gaps means that women are at risk of being doubly excluded: excluded first from the economic opportunities that AI or DPI can offer, but also excluded from the governance decisions shaping their deployment. In the GDC, we see calls for investments in connectivity, in transfer of technology and in DPI, but there is no strong call to invest in people. The problem at stake today is not just about being online; it is about not being erased or excluded from digital innovation altogether. We currently have a gender divide driven by structural inequalities that are deep-rooted and that prevent women from accessing the technical and financial resources that would help them benefit from the digital revolution. This is something we feel must be acknowledged and reversed. And as Papa mentioned, next year we have two important milestones: the review of WSIS, 20 years since its inception, and the review of the Beijing Platform, 30 years since it was adopted. They both offer a unique opportunity, not only to review these mechanisms, but also to advance GDC implementation. And so the objective of today’s session is that we collectively shape a cohesive action agenda and chart the path forward, to ensure that gender equality and women’s digital rights are not merely mentioned, but that they are prioritized in the implementation of all these instruments, WSIS, Beijing and the GDC, and that they all come together. This is the objective of today’s discussion, and we look forward to this exciting panel. Over to you, Papa and colleagues. Great.

Papa Seck: Thank you, Hélène. Really, again, I do recall this work and really trying to shape a paper to guide discussions on the GDC, at least trying, as much as possible, to integrate a gender perspective. And I think this is really one of the, I would say, key achievements of the Action Coalition. So congratulations to you and to colleagues for this excellent work. So now I’ll just turn to the panel. We have a great panel of speakers. I have Isabel De Sola, who is from the office of the UN Secretary-General’s Tech Envoy; Mr. Roy Eriksson, who is Finland’s Global Gateway Ambassador; Ms. Radka Sibille, who is a digital affairs advisor at the EU delegation in Geneva; and Ms. Hajjar El Haddaoui, who is the chief of digital economy and foresight at the Digital Cooperation Organization. And then we have two speakers who are online: Ms. Nandini Chami, who is the deputy director of research and policy and engagement at IT4Change, and Ms. Tala Debs, who is the WSIS and SDGs project coordinator at ITU. So I’ll start with you, Isabel, and just a reminder that we have about four minutes each for answers so that we can stick to the time. So Isabel, in your view, what would be the building blocks of gender-responsive AI, recognizing both its transformative power and the risks that are inherent in it?

Isabel De Sola: How can we leverage the GDC to bridge the gender gaps in AI and governance? Thank you so much for having me. It’s really a pleasure to join this conversation. I’m Isabel De Sola from the UN Secretary General’s Tech Envoy. We, our office recently led a process accompanying a high level advisory body on AI in their research and consultations around the world to emerge recommendations from the field, from countries, from academics, scientists on how to govern AI for humanity. And we held a very interesting session on the question of women and gender and the governance of AI. The results of the high level body’s process are available online. It’s a wonderful report. And a couple of them landed in the Global Digital Compact, which was recently approved at the end of the Summit of the Future. So the high level body offered up its recommendations to the process of the Summit of the Future. And through interaction and negotiation, the member states accepted two and maybe one more of those recommendations, which means that alongside the GDC’s enunciation of a principle for gender equality and empowerment, we also link a way forward on AI governance. So to take your second question first, Papa, it’s rather abstract, to be honest. How do we leverage the GDC? It’s a little bit abstract, but essentially what we’ve put into the GDC are these two issues side by side. So we wanna govern AI for humanity, but we need to do so in a way that is gender empowering. And that’s important because words matter, and this is an agenda that we can take forward to bring stakeholders together to cooperate on concrete tasks, which I think is actually much more interesting to address the question of how do we make sure that AI works for women? Well, the truth is that right now, we’ve got a lot of problems in front of us with AI. So the number one is that it’s data driven, and the data is mostly in seven languages, mostly in English, and it doesn’t represent actually data necessarily generated by women around the world. Certainly not women of color, certainly not women in rural areas who may not have access to connectivity. And underneath that, it’s maybe not girls as well that are generating that data. So the AI models that are being developed at the moment, unfortunately, they’re probably not going to be gender responsive. Not all is lost, because I think that these agendas have done a tremendous job of sounding the alarm from the get-go. So what didn’t happen with the internet, which was to say from the get-go, how does the internet work for women? That’s happening now, is that from the get-go, the UN will take its first steps in implementation of the GDC, keeping the gender empowerment principle and actions in mind. And there’s a second principle that I think can help us from the GDC, which is to work in partnerships. And that’s the end of day one here at the IGF. It’s starting to sound a little bit trite to say that, but I have an example. So we know that current LLMs are not necessarily gender empowering, but if we work in partnerships, and because of the high visibility that’s been created on this risk, we might be able to steward these LLMs and generative AI in a way that is more gender reaffirming. So I heard during our consultations for the high level advisory bodies report, I heard an academic say something that I should give her copyright for, but she said, it takes a village to raise a gender affirming AI. 
So we need to work with the companies, academics, scientists need to look closely at their data and at their models. Tweak the data, data can be tweaked. We need to roll it out into the world and have civil society accompany its applications in the world, and a feedback loop from civil society to companies to tweak it again. Oh, okay, sorry. And probably I’m talking too much. Let me check the time, I’ve gone through four minutes. Okay, sorry, I’m talking too much. It takes a village, meaning academics and scientists, let’s look at the data together, make it more friendly towards women. Let’s look at the applications. What is the user loop? So some applications, we may not be able to tell ahead of time how those applications will affect women and girls. So working with civil society and governments to have a feedback loop towards the companies and a human in the loop that can help to train up women for the use of these AIs or the reverse, tweak them so that they are more gender friendly. So those are just some concrete, one abstract idea about the policy world and one concrete idea of partnerships. Thank you.

Papa Seck: Thank you, thank you very much. And sorry, it was my headphones that weren’t working. So Roy, now I turn to you. In your view, is the GDC adequately addressing the risks of gender disinformation, discrimination and technology-facilitated gender-based violence? And what do you think are the key measures that are required to ensure that women and girls can benefit from a safe and empowering environment? Well, thank you.

Roy Eriksson: So my name is Roy Eriksson and I’m the Global Gateway Ambassador for Finland. Global Gateway is an EU initiative to finance big infrastructure projects in the Global South. In my answers, in order to save some time, I will concentrate mainly on the TFGBV issue; on the first question I will only note that the UN Special Rapporteur, Irene Khan, tackled the topic of gendered disinformation in her report to the 47th UN General Assembly. One of her recommendations is that states should integrate fully gendered perspectives into their policies and programs to address disinformation and misinformation in digital literacy programs. That is also something that we do in Finland: already from primary school, we teach media literacy to our kids so that they have better tools to understand what is real information and what may be disinformation of some sort. When preparing and negotiating the Global Digital Compact, gender equality was one of Finland’s top priorities. We are pleased that it is one of the principles highlighted in the compact, emphasizing its cross-cutting nature. There are also concrete commitments related to technology-facilitated gender-based violence in the compact. From the point of view of this discussion, the most important one is the commitment to develop effective methodologies to measure, monitor, and counter all forms of violence and abuse in digital space. The problem is very real. According to the Economist Intelligence Unit, 38% of women have personal experience of digital violence and 85% of women using the internet have witnessed digital violence against other women. In Finland, the latest research shows, for example, that digital violence is the most common form of violence in young people’s intimate relationships. This issue has wider repercussions, as it is also a real threat to democracy because it threatens to limit the participation of women and girls in society and societal debate. But what can the GDC do and what should we focus on in implementing this commitment? First, we need more data and evidence to base our actions on. In this, it is critical that the private sector is able to track this phenomenon and share data with researchers and authorities; more transparency is needed on the part of the platforms where these activities take place. Second, we need effective grievance mechanisms where users can report concerns and raise issues, followed by action on the part of platform operators and service providers. Third, human rights perspectives need to be mainstreamed in the design of new digital technologies, including digital services, so as to understand their potential human rights impacts and anticipate the need to protect women and girls from risks. And finally, updating our legislative means and ensuring our law enforcement and judiciary systems are capable of addressing TFGBV will be key. Concrete measures need to be taken to ensure that national legislation, policies, strategies and action plans on the prevention and elimination of gender-based violence include online gender-based violence. Law enforcement and the judiciary need to be able to recognize, prevent, investigate and address this problem. Improving services for survivors is also of critical importance. Finland has taken these issues seriously. The GDC reinforces the need for multi-stakeholder cooperation in implementing the UN Digital Agenda.
The compact is not only important for states, but also for non-state actors, the private sector, civil society and academia. The GDC anchors the Agenda for Digital Cooperation firmly under the UN Charter and respects international law and human rights. This responsibility concerns both states and the private sector, and I hope we can together address TFGBV in implementing the compact. And lastly, I would like to mention Finland’s Generation Equality Youth Group as an example of civil society’s active engagement. Established in 2021, it consists of 27 young people who focus on advancing gender equality through advocacy. They have published two manifestos called Right to be Online. The first one is for the technology sector, and the second one, which was published only three weeks ago, is for decision makers on how to tackle gender-based online violence. I will stop here and look forward to the discussion. Is it okay now? Sorry about that. One of my functions at UN Women is actually as the Chief of Research and Data, so we do work a lot on the measurement side, particularly here on the measurement of technology-facilitated violence. And it’s exciting that one of the things we are now doing is really developing a consistent and broad framework for measuring and monitoring TFGBV. I think that is really going to advance us in terms of documenting the problem, measuring it, but also addressing it. So thank you very much. So, Radka, let me turn to you. The GDC includes references to digital skills and leadership in technology. Are they sufficient, and how can we address gender-based disparities in digital access and promote women’s representation in tech industries, in STEM education, but also in decision-making roles? Thank you. Thank you so much. And it’s a pleasure to be here, so thanks for having me.

Radka Sibille: My name is Radka Sibille, and I work for the European Union delegation to the United Nations in Geneva. The EU was very closely involved in the negotiations of the GDC, and we are very happy to see that the GDC is so strongly anchored in international human rights law, including with regard to the gender perspective. As was already mentioned, the GDC upholds gender equality and the empowerment of all women and girls, and it also speaks about the different targets and the different obstacles that we need to overcome. So we need to work on affordable connectivity, digital skilling, and the inclusion of women in tech positions. But, as was already mentioned, the GDC is just a framework, and it can only be impactful if we are able to implement it together and forcefully. That is going to be the work of the whole multi-stakeholder community, and the journey will be long, as we already heard during the high-level segment this morning. The representation of women in the tech workforce is still very low. For instance, when I read the statistics on women in tech in Europe, women comprise only about 19% of the tech workforce, and the number is even lower in leadership positions. We also saw during COVID, for instance, that women in the tech sector faced a likelihood of being furloughed or laid off twice as high as that of their male peers. The statistics also suggest that only about 2.3% of venture capital funding goes to women-led startups. And if you look at the developing countries, the obstacles and challenges are made even greater by the overall global digital divide and the lack of access to meaningful connectivity. There is yet another obstacle that was also mentioned: once women actually get online, they sometimes face gender-based violence, so the online environment is not safe enough for them. So, what do we need to do? I would like to highlight maybe three points that we need to concentrate on. First, increase investments overall in digital infrastructure, including broadband connectivity, and also promote digital skills and literacy programs that particularly target marginalized communities. Second, increase women’s representation in tech industries, in STEM education and in decision-making roles. And of course, we need to make the online space safer for women, with zero tolerance for gender-based violence online. We as the EU are trying to do that as Team Europe through our Global Gateway projects. I will mention just a couple of them. For instance, in Mozambique we have a great initiative called Vamos Digital, which creates space for digital skills and coding courses for high school students, targeting particularly women and girls. With regard to overall connectivity and bridging the digital divide, we cooperate with the ITU on a project on meaningful connectivity indicators that is being implemented in several countries of the world. And then, when it comes to the online digital space, as you might have seen, the EU has produced a number of pieces of digital legislation which are based on a human rights-based approach to technologies and which, in particular, also try to make the online space safe for women.
So, for instance, the Digital Services Act, which also targets content on social media platforms, or the AI Act, which seeks to regulate high-risk situations where AI can be misused or can have a negative impact on human rights, including women’s rights. I will be happy to answer more questions later. Thank you so much.

Papa Seck: Great. Thank you. Thank you very much. So, Ms. El Haddaoui, let me turn to you. As you know, the work that you do at the DCO is really important. There have been many conversations in the lead-up to the GDC on the need to strengthen capacity and address knowledge gaps, a lot of which you already do. We also talked a lot about resource gaps, especially for low-income countries. How can we make sure we prioritize the generation of knowledge, build capacities, but also generate resources towards ambitious actions that can help us bridge what a lot of us have been talking about here, which is fundamentally the gender digital divide? From your experience, can you give us some clues on how we may be able to do that?

Hajjar El Haddaoui: Thank you so much. I think what is very important, and I would like to build on what was said from the EU, is that the GDC sets out principles that we are all agreeing on. At the DCO, our core mandate is to ensure that every person, every business, every nation has a fair opportunity to participate and equal access to all the opportunities offered by the digital economy. This aligns well with what we are trying to achieve with the GDC and its principle of equitable access for everyone. It has also been highlighted how important it is that those principles be translated into actions, and this is part of what we have tried to do, because we have been involved in many of the conversations on the GDC. Recently, we launched what we call the Digital Economy Navigator. If we really want to move forward and make progress towards equitable opportunity, we first need to measure where we are. We really need to understand where the gaps are and what exactly is missing in different countries. That is what we are trying to do with the Digital Economy Navigator, which looks at the societal dimension, at whether people have a fair share of opportunity, and at the enabling layer from government, with the right internet access, the right infrastructure, and the right policies. We assess each country, what its gaps are, and how it can progress. There are 50 countries in the assessment, and we have included some of the LDCs. What we are seeing is that we really need to focus on having that data, including data on women, and on giving women access to the right skills so that they can be part of this acceleration. Isabel talked about AI and how important it is to include different languages, and this is also what we are trying to do with all the initiatives at the DCO: to build on the principles set out in the GDC, to make sure that everyone has the right skills, and that there is the right investment in technology by women, different startups, SMEs, all of them. It has to be a multi-stakeholder approach. It is not upon just one country or one organization; it needs cooperation between the private sector, governments, and us as international organizations, to really work together, because no one entity or nation can solve this issue alone. It needs a unity of effort and direction, which the GDC is already offering, and I think this is the only approach and the only way that this can happen.

Papa Seck: Great, thank you very much. And I think the theme of multi-stakeholders is really resonating, I think, across the past couple of days. So, I’ll now turn to Ms. Chami, and here, Ms. Chami, my question is regarding something that is not in the GDC, particularly DPI, digital public infrastructure. And just this morning, during the plenary, I think we heard about DPI, but also AI-DPI, and here, again, we see a gender perspective that is fundamentally missing. What do you think are the key principles of thinking gender by design that can be applied here?

Speaker 1: Hi, thank you for the question. I hope that you can hear me online and offline as well. So, let’s first understand what we are all talking about when we talk about digital public infrastructure. Here, the UNDP DPI safeguards working group’s definition of DPI as an umbrella term for a gamut of secure and interoperable digital systems and solutions for enabling the delivery of public services is useful to think about. When you look at gender by design, feminists have long recognized that infrastructures are not value-neutral artifacts but political ecosystems. We can recall the feminist argument for the right to reproductive health services to be actioned as a feminist infrastructural right through gender-responsive design of public health clinic infrastructure, as far back as 30 years ago in the days of the ICPD. DPI is, of course, no exception to the gender-by-design argument. With this background, when we look at the question of first principles for actioning gender-responsive DPI, there are some insights I want to share from our research at IT for Change. Number one, data governance choices in DPI solutions embody an exercise of power, and therefore design choices must be evaluated through the gender justice lens at all stages of the data lifecycle. To begin with, with respect to data collection and processing in DPIs that support public service delivery, we must use the principle of data minimization. Further, when we encode gender realities in data categories, we need to pay attention to intersectional power and how it operates, and to what kind of database sorting and targeting we are deploying. We also need to give equal attention to the question of downstream data use. Data governance frameworks for DPIs need to be grounded in feminist data justice visions by protecting the right of all data subjects to dignity, privacy, personal autonomy, the right to be represented in database decision-making, and, most importantly, the right to collectively determine how the social commons of data are preserved and promoted for public value and public benefit. The commons of public welfare data cannot become a free-for-all resource that the market exploits without any benefit sharing with the relevant data communities. Secondly, when adopting DPI solutions, particularly in the global South where gender digital divides in access and use continue to persist, we cannot rush to digital-by-default solutions that result in the exclusion of women from full citizenship and access to their rightful entitlements. This also means that rather than treating mobile as the only last-mile imaginary, the older and abiding model of public access points serving as citizen kiosks for digital public service delivery needs to be integral to DPI imaginaries. My final point is about reimagining DPI as democratic, participatory, accountable infrastructure, because after all, gender inclusion is a project of democracy, as the Southern feminist movement shows us. What this means is that in the design and development of DPIs we need to leave no one behind. Institutional safeguards for public consultations to guide techno-design choices in DPI design and rollout should be in place not just for affected communities but also for frontline workers, the majority of whom are women, whose labor will be implicated in the transition to digital public service delivery.
Further, and most importantly, we need legal guarantees to protect women’s human rights bottom lines in DPI implementation, especially in the public-private partnership arrangements that are becoming increasingly common in the turn to AI-enabled public service delivery, where AI system operators and AI system providers, the government and private partners, will be in new relationships. A legally guaranteed right to explanation in DPI deployment is particularly critical in this context for democratic accountability. Last but not least, last-mile institutional support for addressing intersectional exclusions and discrimination, and the right to grievance redress in DPI systems for welfare service delivery, become crucial. Thank you so much.

Papa Seck: Great, thank you very much. Really excellent principles, and although we missed this in the GDC, we have to make sure it is part of its implementation. There is a role in it for everyone, and that is where the multi-stakeholder issue really comes in. My final question is to Ms. Tala Dabbs. The GDC recommends mainstreaming a gender perspective in connectivity strategies. How has the WSIS process been addressing gender mainstreaming? And what are the plans for WSIS plus 20 to continue fostering a more inclusive digital ecosystem and support implementation of the GDC? We can’t hear. Can you hear me now? Yes. Okay. Thank you very much. So, I’m very pleased to take part in this important panel discussion. Since its inception in 2003, the World Summit on the Information Society has set out a vision for harnessing information and communication technologies to promote gender equality.

Speaker 2: The WSIS Declaration of Principles affirms that the development of ICTs provides enormous opportunities for women, who should be an integral part of, and key actors in, the information society. The inclusion of women and girls is paramount to bridging the digital gender divide. It aligns with the WSIS vision to build people-centered, inclusive and development-oriented information and knowledge societies where everyone can create, access, utilize and share information. However, the digital gender divide remains one of the greatest barriers to the meaningful participation of women in society. According to the latest ITU Facts and Figures, in 2024, 70% of men are using the internet, compared to 65% of women. This means that globally there are 189 million more men than women using the internet in 2024. While significant progress has been made, the estimated 2.6 billion people who remain unconnected are primarily women and girls, especially in LDCs, where progress is unfortunately moving backward. For the past 20 years, the WSIS process has been instrumental in addressing these issues and bringing them to the forefront. Gender mainstreaming is a cross-cutting issue across all 11 WSIS action lines, which offer a robust framework to promote meaningful, affordable access, digital literacy and empowerment for women, among other objectives. At the annual WSIS Forum, WSIS consistently champions a special track on ICTs and gender mainstreaming, and from this special track we have launched initiatives such as the WSIS Gender Trendsetters and the WSIS Stocktaking Repository of Women and Technology, a unique platform which aims to identify and connect women leaders and practitioners across the digital realm for development and to create spaces for networking, mentorship sessions, and documentation of best practices. I invite you all to join this repository. And of course the WSIS Gender Trendsetters pledge to actively champion, advocate for, and promote the inclusion of gender considerations in the digital discourse. Another important aspect to highlight is the WSIS Stocktaking and the WSIS Prizes, which serve as a valuable resource, with more than 13,000 projects, including gender-sensitive projects that promote digital inclusion from across the world, and which facilitate their replication among multi-stakeholders. For instance, the project Our Girls, Our Future by Yielding Accomplished African Women in Ghana was recognized in 2021 for addressing the barriers women face in accessing digital technologies, specifically targeting women in underrepresented communities and regions. As we look toward WSIS plus 20, gender mainstreaming remains a priority in fostering a more inclusive digital ecosystem. The upcoming review in 2025 provides an opportunity to evaluate progress, identify gaps, and develop strategies that align with the recently adopted Global Digital Compact, where gender perspectives are central to its implementation, and of course with the 2030 Agenda for Sustainable Development. Some of this work includes continuing to encourage the development of gender-responsive technology and innovation that caters to the needs of women and girls, through partnerships and multi-stakeholder collaborations, and continuing to support the collection of gender-disaggregated data to better inform policies and initiatives, using the Partnership on Measuring ICT for Development.
Under WSIS Action Line C4 on capacity building, we will keep supporting and scaling skills development programs specifically designed for women and girls in more regions to accelerate progress. Great examples are the ITU’s EQUALS Global Partnership, focusing on access, skills, leadership, and research, and the AI Skill Accelerator for Girls, which aims to equip young women and marginalized communities with the capacity to become AI creators and not just consumers. Other work includes promoting gender-responsive policies, advocating for policies that address affordability, online safety, and equitable access to digital resources for women and marginalized groups, and collaborating with UN Women and other stakeholders on a WSIS-Beijing plus 30 common action plan for bridging the gender digital divide. Through the open consultation process that is currently active in preparation for the WSIS plus 20 High Level Event in 2025, some WSIS stakeholders are calling for a new WSIS action line on gender. Joining the WSIS plus 20 High Level Event 2025 and contributing to the ICTs and gender mainstreaming special track is another way to work towards the WSIS plus 20 review in 2025. To conclude, over the years WSIS has acknowledged the critical importance of fostering digital gender equality, and this recognition stems from the understanding that digital inclusion is a cornerstone for achieving broader social and economic equality. The Global Digital Compact, Beijing plus 30, and the WSIS plus 20 review offer a critical opportunity to reimagine the digital future. By doubling down on gender mainstreaming, we can ensure that this digital revolution leaves no one behind. Thank you very much. Thank you very much, Ms. Dabbs. So, really, a very rich discussion, and some common themes are being highlighted here. Helen, maybe let me just turn to you to see if there are a couple of questions. We still have about five minutes, but just in case there are any questions or comments coming through the chat. Thank you, Papa. I know that we’re over time, but at least I think

Papa Seck: we have one speaker. Caitlin, are you there? Yeah, I’m here. Hi. Thank you very much. So, hello, colleagues and many friends on this panel. Just to get to it: I’m Caitlin Kraft

Speaker 3: Buchman, from Women at the Table and the A+ Alliance for Inclusive Algorithms. I just want to say, as we look at the lessons on gender equality from the GDC, we have a couple of operational ideas. One is to add a standalone action line for gender in WSIS. We know that having a standalone paragraph in the GDC, which we worked very hard to incorporate, is a powerful opportunity. We would obviously also like stronger mainstreaming throughout WSIS, which I know they are working very hard at, but we ask everybody to support and advocate for that. We also think a digital track at Beijing plus 30, and within CSW in general, would be a really fabulous addition to bring together the worlds of digital and gender. And to that end, also the use of CEDAW’s very recent General Recommendation 40 on parity for women in all forms of decision-making, including in the technology world, for which, again, we worked very hard to have AI and technology mentioned. So it would be fabulous if coordination between UN Women, WSIS, and OSET, the Office of the Secretary-General’s Envoy on Technology, were integrated and worked hard on, so that we address the floor, the terrible floor, of technology-facilitated gender-based violence, but also start to create a much higher and much wider ceiling for the possibilities that the new technology can bring. And then finally, to help make those possibilities a reality, we would like everyone to consider gender-responsive budgeting and, in particular, gender-responsive public procurement, with set-asides both for women-owned and women-run businesses, for which we now have an ISO standard, and also for businesses that address the inequities that women face and work to create a better enabling environment that way. So it’s not only about more women and more women studying STEM; it’s also about more operations that actually go towards addressing structural barriers. And finally, sex-disaggregated data, by sex, age, and geography. Thank you very much.

Papa Seck: Thank you very much, and sorry I had to rush you there. I won’t prolong this much longer; I know you have places to be. But thank you to all of you for joining us and for this conversation, and also to those who have joined us online. We still have 25 participants listening to this conversation. And it’s not a one-and-done; we’ll continue this conversation, including through Beijing, WSIS, and next year. So, thank you very much. Thank you.


Helene Molinier

Speech speed

139 words per minute

Speech length

847 words

Speech time

364 seconds

GDC acknowledges gender disparities in digital technologies

Explanation

The Global Digital Compact (GDC) recognizes that digital technologies are not gender-neutral. This acknowledgment is a step towards addressing the gender inequalities in the digital realm.

Major Discussion Point

Gender Equality in the Global Digital Compact (GDC)

Agreed with

Roy Eriksson

Radka Sibille

Hajjar El Haddaoui

Speaker 2

Agreed on

Importance of gender equality in digital technologies

Gender equality is one of 13 GDC principles

Explanation

The GDC has included gender equality and the empowerment of all women and girls as one of its 13 core principles. This inclusion highlights the importance of gender equality in the digital agenda.

Major Discussion Point

Gender Equality in the Global Digital Compact (GDC)

Agreed with

Roy Eriksson

Radka Sibille

Hajjar El Haddaoui

Speaker 2

Agreed on

Importance of gender equality in digital technologies

GDC reaffirms recommendations from CSW67 on mainstreaming gender perspectives

Explanation

The GDC reinforces the recommendations from the 67th session of the Commission on the Status of Women (CSW67) regarding the integration of gender perspectives in digital strategies. This includes addressing barriers to equitable digital access for women and girls.

Evidence

Recommendations include mainstreaming gender perspectives in digital strategies, addressing barriers to equitable digital access, and promoting women’s leadership in technology decisions.

Major Discussion Point

Gender Equality in the Global Digital Compact (GDC)

Agreed with

Roy Eriksson

Radka Sibille

Hajjar El Haddaoui

Speaker 2

Agreed on

Importance of gender equality in digital technologies


Roy Eriksson

Speech speed

126 words per minute

Speech length

848 words

Speech time

401 seconds

GDC commits to developing methodologies to counter digital violence

Explanation

The Global Digital Compact includes a commitment to develop effective methods for measuring, monitoring, and countering all forms of violence and abuse in digital spaces. This is particularly important for addressing technology-facilitated gender-based violence.

Evidence

According to the Economist Intelligence Unit, 38% of women have personal experience of digital violence and 85% of women using the internet have witnessed digital violence against other women.

Major Discussion Point

Addressing Gender-Based Violence in Digital Spaces

Agreed with

Radka Sibille

Agreed on

Need to address digital violence against women

Need for more data and evidence on digital violence against women

Explanation

There is a critical need for more comprehensive data and evidence on digital violence against women. This information is essential for developing effective strategies to combat the issue.

Evidence

Research in Finland shows that digital violence is the most common form of violence in young people’s intimate relationships.

Major Discussion Point

Addressing Gender-Based Violence in Digital Spaces

Agreed with

Radka Sibille

Agreed on

Need to address digital violence against women

Importance of effective grievance mechanisms for users to report concerns

Explanation

Effective grievance mechanisms are crucial for users to report concerns and raise issues related to digital violence. These mechanisms should be followed by action from platform operators and service providers.

Major Discussion Point

Addressing Gender-Based Violence in Digital Spaces

Agreed with

Radka Sibille

Agreed on

Need to address digital violence against women


Radka Sibille

Speech speed

172 words per minute

Speech length

646 words

Speech time

225 seconds

EU legislation aims to make online spaces safer for women

Explanation

The European Union has developed digital legislation based on a human rights approach to technologies. These laws aim to create a safer online environment for women and address issues related to content on social media platforms and the use of AI.

Evidence

Examples include the Digital Services Act and the AI Act.

Major Discussion Point

Addressing Gender-Based Violence in Digital Spaces

Agreed with

Roy Eriksson

Agreed on

Need to address digital violence against women

Need to increase women’s representation in tech industries and STEM education

Explanation

There is a pressing need to improve women’s representation in the technology sector and STEM education. Current statistics show a significant gender gap in these areas, which needs to be addressed to promote gender equality in the digital realm.

Evidence

Statistics show that women comprise only about 19% of the tech workforce in Europe, with even lower numbers in leadership positions.

Major Discussion Point

Promoting Women’s Participation in Technology

Agreed with

Speaker 2

Agreed on

Promoting women’s participation in technology

Importance of investing in digital infrastructure and skills programs for women

Explanation

Increasing investments in digital infrastructure and promoting digital skills and literacy programs are crucial for bridging the gender digital divide. These initiatives should particularly target marginalized communities to ensure inclusive digital development.

Evidence

EU’s Global Gateway project in Mozambique called ‘Vamos Digital’ creates space for digital skills and coding courses for high school students, targeting women and girls.

Major Discussion Point

Promoting Women’s Participation in Technology

Agreed with

Speaker 2

Agreed on

Promoting women’s participation in technology


Isabel De Sola

Speech speed

158 words per minute

Speech length

820 words

Speech time

309 seconds

Current AI models likely not gender-responsive due to data biases

Explanation

Existing AI models are likely not gender-responsive because they are based on biased data. The data used to train these models is predominantly in a few languages, mainly English, and does not adequately represent data generated by women, especially women of color and those in rural areas.

Major Discussion Point

Gender-Responsive AI and Digital Public Infrastructure

Differed with

Speaker 1

Differed on

Approach to addressing gender disparities in AI

Need for partnerships to develop gender-affirming AI

Explanation

Developing gender-affirming AI requires collaborative efforts from various stakeholders. This includes working with companies, academics, scientists, and civil society to ensure AI models and applications are more gender-friendly and inclusive.

Evidence

Suggestion of a feedback loop from civil society to companies to continuously improve AI systems for gender responsiveness.

Major Discussion Point

Gender-Responsive AI and Digital Public Infrastructure


Speaker 1

Speech speed

135 words per minute

Speech length

671 words

Speech time

296 seconds

Data governance in DPI must be evaluated through gender justice lens

Explanation

Data governance choices in Digital Public Infrastructure (DPI) solutions represent an exercise of power. Therefore, design choices must be evaluated through a gender justice lens at all stages of the data lifecycle, including data collection, processing, and downstream use.

Evidence

Suggestion to use the principle of data minimization and pay attention to intersectional power dynamics when encoding gender realities in data categories.

Major Discussion Point

Gender-Responsive AI and Digital Public Infrastructure

Differed with

Isabel De Sola

Differed on

Approach to addressing gender disparities in AI

DPI design should include public consultations with affected communities

Explanation

The design and development of Digital Public Infrastructure should involve public consultations with affected communities and frontline workers. This participatory approach ensures that DPI is democratic, accountable, and responsive to the needs of all users, including women.

Evidence

Emphasis on the importance of including frontline workers, the majority of whom are women, in consultations about digital public service delivery.

Major Discussion Point

Gender-Responsive AI and Digital Public Infrastructure


Hajjar El Haddaoui

Speech speed

120 words per minute

Speech length

505 words

Speech time

250 seconds

GDC offers framework for cooperation on gender equality

Explanation

The Global Digital Compact provides a framework for multi-stakeholder cooperation to ensure equitable access to digital opportunities. This aligns with the core mandate of organizations like DCO to promote fair participation in the digital economy for all individuals, businesses, and nations.

Major Discussion Point

Gender Equality in the Global Digital Compact (GDC)

Agreed with

Helene Molinier

Roy Eriksson

Radka Sibille

Speaker 2

Agreed on

Importance of gender equality in digital technologies

Digital Economy Navigator to assess countries’ progress on digital equity

Explanation

The Digital Economy Navigator is a tool launched to assess countries’ progress towards equitable digital opportunities. It helps in understanding the gaps and missing elements in different countries’ digital ecosystems, including aspects related to society and government enablers.

Evidence

The tool includes assessment of 50 countries, including some LDCs, to identify gaps and areas for progress in digital equity.

Major Discussion Point

Implementing the GDC for Gender Equality


Speaker 2

Speech speed

141 words per minute

Speech length

848 words

Speech time

359 seconds

WSIS process champions ICTs and gender mainstreaming

Explanation

The World Summit on the Information Society (WSIS) process has been instrumental in promoting the use of Information and Communication Technologies (ICTs) to advance gender equality. WSIS has consistently championed a special track on ICTs and gender mainstreaming in its annual forum.

Evidence

Initiatives like WSIS Gender Trendsetters, WSIS Stocktaking Repository of Women and Technology, and WSIS Prizes have been launched to promote gender equality in the digital realm.

Major Discussion Point

Promoting Women’s Participation in Technology

Agreed with

Helene Molinier

Roy Eriksson

Radka Sibille

Hajjar El Haddaoui

Agreed on

Importance of gender equality in digital technologies

Need for gender-responsive technology and innovation

Explanation

There is a need to encourage the development of gender-responsive technology and innovation that caters to the needs of women and girls. This can be achieved through partnerships and multi-stakeholder collaborations.

Evidence

Examples include ITU’s Equals Global Partnership and the AI Skill Accelerator for Girls program.

Major Discussion Point

Promoting Women’s Participation in Technology

Agreed with

Radka Sibille

Agreed on

Promoting women’s participation in technology


Speaker 3

Speech speed

128 words per minute

Speech length

339 words

Speech time

158 seconds

Suggestions for standalone gender action line in WSIS and digital track at Beijing+30

Explanation

There are proposals to create a standalone action line for gender in WSIS and to include a digital track at the Beijing+30 review. These additions would help to better integrate gender perspectives into digital and technology discussions at major international forums.

Evidence

Reference to CEDAW’s recent general recommendation 40 for parity for women in all forms of decision-making, including in the technology world.

Major Discussion Point

Implementing the GDC for Gender Equality

Agreements

Agreement Points

Importance of gender equality in digital technologies

Helene Molinier

Roy Eriksson

Radka Sibille

Hajjar El Haddaoui

Speaker 2

GDC acknowledges gender disparities in digital technologies

Gender equality is one of 13 GDC principles

GDC reaffirms recommendations from CSW67 on mainstreaming gender perspectives

GDC offers framework for cooperation on gender equality

WSIS process champions ICTs and gender mainstreaming

Multiple speakers emphasized the importance of recognizing and addressing gender disparities in digital technologies, as reflected in the Global Digital Compact (GDC) and other international frameworks.

Need to address digital violence against women

Roy Eriksson

Radka Sibille

GDC commits to developing methodologies to counter digital violence

Need for more data and evidence on digital violence against women

Importance of effective grievance mechanisms for users to report concerns

EU legislation aims to make online spaces safer for women

Speakers agreed on the urgency of addressing digital violence against women through various means, including data collection, grievance mechanisms, and legislation.

Promoting women’s participation in technology

Radka Sibille

Speaker 2

Need to increase women’s representation in tech industries and STEM education

Importance of investing in digital infrastructure and skills programs for women

Need for gender-responsive technology and innovation

Speakers emphasized the need to increase women’s participation in technology fields through education, skill development, and targeted investments.

Similar Viewpoints

Both speakers highlighted the importance of addressing gender biases in AI and data governance to ensure more inclusive and equitable digital technologies.

Isabel De Sola

Speaker 1

Current AI models likely not gender-responsive due to data biases

Data governance in DPI must be evaluated through gender justice lens

Unexpected Consensus

Multi-stakeholder approach to digital gender equality

Helene Molinier

Hajjar El Haddaoui

Speaker 1

GDC reaffirms recommendations from CSW67 on mainstreaming gender perspectives

GDC offers framework for cooperation on gender equality

DPI design should include public consultations with affected communities

Despite coming from different sectors, these speakers all emphasized the importance of a multi-stakeholder approach in addressing digital gender equality, suggesting a broader consensus on collaborative efforts.

Overall Assessment

Summary

The main areas of agreement include recognizing gender disparities in digital technologies, addressing digital violence against women, promoting women’s participation in technology, and the need for gender-responsive AI and data governance.

Consensus level

There is a high level of consensus among the speakers on the importance of gender equality in the digital realm. This strong agreement implies a shared understanding of the challenges and a collective commitment to addressing them, which could facilitate more coordinated and effective actions in implementing the Global Digital Compact and related initiatives.

Differences

Different Viewpoints

Approach to addressing gender disparities in AI

Isabel De Sola

Speaker 1

Current AI models likely not gender-responsive due to data biases

Data governance in DPI must be evaluated through gender justice lens

While both speakers acknowledge the need for gender-responsive AI, Isabel De Sola focuses on partnerships to develop gender-affirming AI, while Speaker 1 emphasizes the importance of data governance and evaluation through a gender justice lens.

Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to implementing gender equality in digital spaces, particularly in AI development, data governance, and addressing digital violence.

difference_level

The level of disagreement among the speakers is relatively low. Most speakers agree on the overall goals of promoting gender equality in digital spaces and implementing the GDC. The differences mainly lie in the specific strategies and focus areas each speaker emphasizes. This level of disagreement is not likely to significantly impede progress on the topic, but rather suggests a need for integrated approaches that combine various strategies to achieve comprehensive gender equality in digital spaces.

Partial Agreements

Partial Agreements

Both speakers agree on the need to address digital violence against women, but they propose different approaches. Roy Eriksson emphasizes the development of methodologies within the GDC framework, while Radka Sibille highlights EU legislation as a means to create safer online spaces.

Roy Eriksson

Radka Sibille

GDC commits to developing methodologies to counter digital violence

EU legislation aims to make online spaces safer for women

Similar Viewpoints

Both speakers highlighted the importance of addressing gender biases in AI and data governance to ensure more inclusive and equitable digital technologies.

Isabel De Sola

Speaker 1

Current AI models likely not gender-responsive due to data biases

Data governance in DPI must be evaluated through gender justice lens

Takeaways

Key Takeaways

The Global Digital Compact (GDC) acknowledges gender disparities and includes gender equality as a key principle, but implementation with clear targets and accountability is crucial.

Technology-facilitated gender-based violence is a major concern that requires better data, reporting mechanisms, and legislative responses.

There is a significant need to increase women’s representation in tech industries, STEM education, and digital leadership roles.

Current AI and digital public infrastructure (DPI) systems often lack gender-responsive design and need improvement.

Multi-stakeholder cooperation and partnerships are essential for implementing the GDC and addressing gender digital divides.

Resolutions and Action Items

Develop effective methodologies to measure, monitor, and counter digital violence against women

Increase investments in digital infrastructure and skills programs targeting women and marginalized communities

Integrate gender perspectives into policies and programs addressing disinformation

Create a Digital Economy Navigator to assess countries’ progress on digital equity

Support the WSIS Gender Trendsetters and Repository of Women in Technology initiatives

Collaborate on a WSIS-Beijing+30 Common Action Plan for bridging the gender digital divide

Unresolved Issues

How to effectively address gender biases in AI and large language models

Specific mechanisms for enforcing GDC commitments on gender equality

Strategies for increasing venture capital funding for women-led tech startups

Methods to ensure gender-responsive design in digital public infrastructure

Approaches to balance digital-by-default solutions with inclusion of women lacking digital access

Suggested Compromises

Partnering with tech companies, academics, and civil society to develop more gender-affirming AI systems

Balancing digital service delivery with maintaining non-digital access points for those lacking connectivity

Implementing gender-responsive public procurement to support both women-owned businesses and those addressing women’s inequities

Thought Provoking Comments

We cannot have digital technologies or digital infrastructure deployed without assessing their broader associated risks and opportunities. And so having these gaps means that women are at risk of being doubly excluded: excluded first from the economic opportunities that AI or DPI can offer, but also excluded from the governance decisions shaping their deployment.

speaker

Helene Molinier

reason

This comment highlights the critical importance of considering gender impacts when developing and deploying digital technologies. It introduces the concept of ‘double exclusion’ for women in the digital realm.

impact

This set the tone for much of the subsequent discussion, emphasizing the need for gender-responsive approaches in AI and digital infrastructure development.

It takes a village to raise a gender affirming AI. So we need to work with the companies, academics, scientists need to look closely at their data and at their models. Tweak the data, data can be tweaked. We need to roll it out into the world and have civil society accompany its applications in the world, and a feedback loop from civil society to companies to tweak it again.

speaker

Isabel De Sola

reason

This comment introduces a collaborative, multi-stakeholder approach to developing gender-affirming AI, emphasizing the importance of ongoing feedback and adjustment.

impact

It shifted the conversation towards practical solutions and highlighted the need for continuous engagement between tech developers and civil society.

Data governance choices in DPI solutions embody an exercise of power, and therefore design choices must be evaluated through the gender justice lens in all stages of the data lifecycle.

speaker

Nandini Chami

reason

This comment brings attention to the power dynamics inherent in data governance and the need to consider gender justice at every stage of data use in digital public infrastructure.

impact

It deepened the discussion on DPI by introducing a critical feminist perspective on data governance and design choices.

We need to put in place enforceable standards grounded in international human rights.

speaker

Helene Molinier

reason

This comment emphasizes the need for concrete, enforceable measures to ensure gender equality in digital spaces, moving beyond aspirational language.

impact

It shifted the focus towards actionable steps and policy measures, influencing subsequent discussions on implementation strategies.

Overall Assessment

These key comments shaped the discussion by consistently emphasizing the need for gender-responsive approaches in digital technology development and governance. They moved the conversation from identifying problems to proposing solutions, highlighting the importance of multi-stakeholder collaboration, data governance, and enforceable standards. The discussion evolved from general principles to specific strategies for implementing gender equality in digital spaces, with a strong focus on practical steps and policy measures.

Follow-up Questions

How can we ensure that the GDC implementation is set on clear targets and accountability mechanisms?

speaker

Helene Molinier

explanation

This is important to ensure commitments to gender equality are more than symbolic and lead to concrete action.

How can we develop effective methodologies to measure, monitor, and counter all forms of violence and abuse in digital space?

speaker

Roy Eriksson

explanation

This is critical for addressing technology-facilitated gender-based violence and making the online space safer for women.

How can we increase investments in digital infrastructure, digital skills, and literacy programs that particularly target marginalized communities?

speaker

Radka Sibille

explanation

This is essential for bridging the gender digital divide and ensuring women’s meaningful participation in the digital economy.

How can we implement gender-responsive data governance in Digital Public Infrastructure (DPI) solutions?

speaker

Nandini Chami

explanation

This is crucial for ensuring that DPI design and implementation do not perpetuate or exacerbate existing gender inequalities.

How can we integrate gender perspectives into AI governance and development?

speaker

Isabel De Sola

explanation

This is important to ensure that AI systems are gender-responsive and do not perpetuate biases against women and girls.

How can we improve data collection and gender-disaggregated data to better inform policies and initiatives in the digital sector?

speaker

Tala Dabbs

explanation

This is crucial for understanding the gender digital divide and designing effective interventions to address it.

How can we implement gender-responsive budgeting and public procurement in the technology sector?

speaker

Caitlin Kraft Buchman

explanation

This could help create a better enabling environment for women-owned and women-run businesses in the tech sector.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World

WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World

Session at a Glance

Summary

This discussion focused on AI regulation and governance, particularly exploring the global impact of the European Union’s AI Act. Panelists from various backgrounds discussed the potential for the EU AI Act to become a de facto global standard, with mixed opinions on its likelihood. Key challenges for global AI standardization were identified, including scoping issues, achieving consensus, and translating fundamental rights into technical standards.

The role of civil society in AI governance was emphasized, with participants highlighting the importance of monitoring governments, advocacy, and facilitating dialogue. The discussion also addressed the unique challenges faced by developing nations in leveraging AI while upholding human rights. These challenges include digital divides, lack of quality data, and capacity issues in both technological implementation and policy-making.

Panelists explored the differences between internet and AI standardization, noting that AI was not built on standards from the outset like the internet was. The potential for big tech companies to resist EU regulations was discussed, with the EU’s stance being that responsible AI development is non-negotiable for market access.

The discussion concluded by addressing concerns about AI’s broad impact across various fields, including healthcare and neurotechnology. Participants stressed the need for ongoing monitoring, impact assessments, and civil society engagement to ensure responsible AI development and use. Overall, the session highlighted the complex challenges in creating effective global AI governance while balancing innovation, regulation, and human rights considerations.

Keypoints

Major discussion points:

– The potential global impact of the EU AI Act and whether it will become a de facto global standard

– Challenges for international standardization of AI, including scoping, finding experts, and achieving consensus

– The role of civil society in AI governance and enforcement

– AI development and regulation in the Global South, including capacity building needs

– Balancing innovation and regulation/control of AI technologies

Overall purpose:

The purpose of this discussion was to explore AI regulation and policymaking from a global perspective, gathering views from different stakeholders on key issues related to AI governance, standardization, and implementation.

Tone:

The overall tone was informative and collaborative. Panelists shared their expert perspectives in a constructive manner, while also encouraging audience participation through polls and questions. The tone remained consistent throughout, with speakers building on each other’s points and addressing audience questions thoughtfully.

Speakers

– Auke Pals: KPMG

– Lisa Vermeer: Ministry of Economic Affairs in Netherlands, implements European AI Act in Netherlands

– Ananda Gautam: Open Internet Nepal

– Juliana Sakai: Executive director of Transparency Brazil

Additional speakers:

– Wouter Cobus: With the platform Internet Standards

– Karen (no surname available): No specific role/title mentioned

Full session report

AI Regulation and Governance: A Global Perspective

This discussion explored the complex landscape of AI regulation and governance from a global perspective, bringing together experts from various backgrounds to address key issues in AI policymaking, standardisation, and implementation. The session included interactive voting elements and was constrained by time limitations.

EU AI Act and Its Global Impact

A central focus of the discussion was the potential global impact of the European Union’s AI Act. Lisa Vermeer, from the Ministry of Economic Affairs in the Netherlands, presented arguments both for and against the Act becoming a de facto global standard for AI governance. While the Act’s comprehensive approach could influence AI development worldwide, Vermeer noted that it might not be suitable for direct replication in other regions due to differing regulatory contexts.

Ananda Gautam, representing civil society from Nepal, highlighted the Act’s potential influence through extraterritorial jurisdiction, particularly on developing nations. This perspective underscored the far-reaching implications of EU regulations beyond its borders.

Auke Pals, from KPMG, raised concerns about potential resistance from big tech companies in complying with EU AI Act requirements. This point introduced the complex dynamics between regulators and industry players in shaping the future of AI governance.

Challenges in Global AI Standardisation

The discussion revealed significant challenges in achieving global AI standardisation. Pals pointed out issues such as fragmentation and overlap between different standardisation bodies, as well as the difficulty in balancing regulation, standardisation, and innovation. The rapidly evolving nature of AI technology was identified as a major obstacle to effective standardisation. A suggestion was made to learn from internet standardization processes in developing AI standards.

Gautam brought attention to the lack of capacity in developing nations to implement or create AI standards, highlighting a crucial gap in global AI governance. This perspective emphasised the need for inclusive approaches that consider the diverse contexts and capabilities of different nations.

Role of Civil Society in AI Governance

The importance of civil society in AI governance emerged as a key theme, with strong consensus among speakers. Juliana Sakai, from Transparency Brazil, shared insights from the Brazilian experience, highlighting how existing legal frameworks can be leveraged to challenge AI implementations. She emphasized the role of civil society in:

1. Monitoring government use of AI systems

2. Advocating for transparency and accountability in AI implementation

3. Facilitating dialogue between stakeholders on AI governance

Gautam added the importance of capacity building and raising awareness about AI impacts, particularly in developing nations. This multifaceted role of civil society was seen as essential for ensuring responsible AI development and use globally.

AI Challenges and Opportunities for Developing Nations

The discussion highlighted both challenges and opportunities for developing nations in the context of AI. Gautam elaborated on issues such as the digital divide, language barriers, and lack of quality data and technological capacity, which could hinder AI adoption and development. However, he also emphasised the potential to leverage AI in addressing development challenges, particularly in education and healthcare sectors.

The need for frameworks to ensure AI upholds human rights globally was stressed, with particular emphasis on accommodating the needs of developing nations in global AI governance structures. This perspective underscored the importance of inclusive approaches to AI regulation and development.

AI in Military Contexts

A significant point raised during the Q&A session was the potential use of AI in military contexts. The discussion touched on concerns about Gaza being used as a testing ground for AI in warfare. This highlighted the critical need for ethical considerations and international regulations regarding the use of AI in military and security domains.

Ethical Considerations and Societal Impact

The discussion also touched on deeper ethical considerations and the broader societal impact of AI. Concerns were raised about AI’s potential to replicate and amplify human biases, particularly in sensitive areas like healthcare. This broadened the conversation beyond regulatory frameworks to include the ethical implications of AI’s increasing role in society.

Conclusion

The discussion provided valuable insights into the current state of AI governance globally, highlighting the complex interplay between regulation, innovation, and ethical considerations. While many questions remain open-ended, the session underscored the need for ongoing dialogue, collaborative approaches, and flexible governance frameworks. These frameworks must be able to adapt to the rapidly evolving AI landscape while addressing fundamental concerns about fairness, transparency, and human rights across diverse global contexts.

Session Transcript

Auke Pals: are joining this session. My name is Auke, Auke Pals. I work for KPMG. We’re here today for the AI Regulation Unveiled session. What we’re trying to do in this session is explore AI regulation, and we’re also trying to interact with you as much as possible so we can gather your views on AI regulation and policymaking worldwide. I’m not here alone: next to me is Lisa, Lisa Vermeer. Welcome. Juliana is joining us online. Welcome as well. And Ananda is here, also next to me in the room. Welcome all. Can I give you the floor, Lisa, to introduce yourself?

Lisa Vermeer: Yes, thank you so much. My name is Lisa Vermeer. I work at the Ministry of Economic Affairs in the Netherlands, and one of my main jobs is to implement the European AI Act in the Netherlands. I need to add that to my introduction.

Ananda Gautam: Hello, everyone. My name is Ananda Gautam. I’m from Nepal. I work with Open Internet Nepal, and I belong to the civil society community. I work on capacity building of young people and on making the internet more transparent, inclusive, and sustainable.

Auke Pals: Thank you. Juliana, can I give you the floor as well?

Juliana Sakai: Yes, sure. Can you hear me? Yes, I can hear you. So thank you so much. I am Juliana Sakai. I’m the executive director of Transparency Brazil, which is an independent NGO devoted to promoting more transparency and accountability in the Brazilian government. This also includes the government’s use of AI, so we have been monitoring how the Brazilian government is developing, deploying, and using AI, and producing recommendations in this field. In parallel, we are also monitoring how AI regulation is being discussed in Congress.

Auke Pals: Thank you. Thank you very much. So this is our panel for today, but this is an interactive session, so I would encourage you all to join the discussion once we get there. But first, I would like you to participate in a vote. You can scan the QR code or go to kpngvote.nl and log in with the code IGF2024. As a starter, we’d like to introduce to you the global impact of European AI regulations, and for this I would like to give the floor to Lisa.

Lisa Vermeer: Thank you so much. Well, for this policy question I would like to start with the online poll, and then we can see what comes out of it. So let’s see if it works. The question is: do you believe that the EU AI Act will become the de facto global standard for AI governance? It is often claimed to be one of the first comprehensive AI laws in the world. There are many other laws, but the question is, will it become the global standard? So what do you think, yes or no, or do you actually have no idea what AI is about? That’s also possible. We have six votes in already. Don’t be afraid to just choose something, although you might have a nuanced opinion. Looking at the room, I guess most of you have voted by now, so let’s go to the results. Interesting, so everyone knows what AI is. That’s a good thing to know. It’s 55% yes and 44% no, so a slight preference for yes over no. I’m really looking forward to hearing more about your perspectives on how this would work. For this session, I would like to share my thoughts about why the answer can be yes, why it can be no, and what is, in my perspective, a challenge for all of us. If you look at the European AI Act, it is product safety legislation. The idea is that for all AI systems that enter the market in the EU, in the whole European Union, you can assume that they are safe, because the AI Act brings together requirements for risky AI, for several types of risky AI, and if an AI system falls into one of these categories, it has to meet certain requirements before it can be sold or used in the EU, by the private sector, by the public sector, by basically everyone. That means that for lots of AI systems there will be requirements that make them actually safer. And then safety, you can look at it for, it’s… This, maybe you can move it out, what?

Auke Pals: You have to hold it closer.

Lisa Vermeer: Okay, thanks — yeah, perfect. This is better for the audience, I think. Thanks. So the safety of all AI systems will be improved, and that means there are requirements for secure AI, for healthy AI, and for fundamental-rights-abiding AI. The risks that may come with AI in these areas have to be tackled before systems enter the market — at least, that’s the premise of the law. That means that when these systems are made for the European market, whether by European companies or by big companies from across the world, they will be safe enough and meet all the requirements. And the presumed effect is that lots of companies will build one type of AI system to sell everywhere. Because of the EU’s requirements, they will build a system, for example, for the health sector to use in a hospital, meet the AI Act’s requirements, sell it in Europe, and then other areas of the world will also benefit from the fact that this AI meets the requirements when it is sold to, for example, a hospital in Nepal or anywhere else in the world. For a whole range of topics that is going to be the case, and that makes me expect that you can say yes: it will set the standards, and then it will be the standard for lots of AI across the world. But there is a whole set of risky areas in the AI Act where it is harder to say whether the requirements will really be adopted globally. For example, critical infrastructure or biometric AI will be regulated, but we can expect that some companies will build multiple products: the safest product for the EU, and other products that do not meet the same data-safety or other requirements, which they then sell in the rest of the world. You see that happening a lot. So it depends on the incentive for the company — whether they make one product for the whole world or just one that is super secure for the EU. That’s why the yes is maybe a bit limited in the end. I would also say there is very much a case for no as an answer, because the EU is a very specific area of the world in terms of regulation. A lot of regulation has already been adopted for the digital economy and for personal data, for example the GDPR and the Data Act. It is a very dense regulatory field, very different from the regulatory ecosystems in other areas of the world — big areas like India or Brazil, for example. The legal context in which such a law can be adopted is very different, and that is why the AI Act’s design may not be suitable for other areas of the world to replicate. Also, product regulation is a very old way of regulating markets. There is lots of product regulation in the EU, but it may not be the approach in other areas of the world, which is another reason it is not that easy to replicate. And then another challenge that I wanted to share with you is that enforcement of the AI Act promises to be really challenging, because the Act is very broad and sets rules for a whole range of areas, basically touching all industries and all public areas. How do you effectively enforce such a law?
How do you make sure that the regulators — not policymakers and lawmakers like myself, but the regulators that are going to do the oversight of the law — are really able to work with it and make sure that people, companies and organizations abide by it? Already in the EU this is a major challenge, and it is something I am working on a lot and wrapping my head around. I think that will also be the case in lots of other areas worldwide: it may be quite difficult to have a law like the AI Act elsewhere, because how do you enforce a law that is so broad? That breadth creates uncertainty. So yeah, I think I will leave it at that.
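
[Editorial illustration, not part of the session.] To make the risk-based logic Lisa describes a bit more concrete, here is a minimal sketch of how the AI Act's broad obligation tiers — prohibited practices, high-risk systems, limited-risk systems with transparency duties, and minimal-risk systems — could be expressed as a simple triage. The category lists are abbreviated assumptions for the example only, not the legal text.

```python
# Purely illustrative sketch: a toy triage of AI use cases into the EU AI Act's
# broad risk tiers. The category lists below are abbreviated examples, not the
# legal text, and the tier labels are simplified.

PROHIBITED = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK = {
    "remote biometric identification",
    "critical infrastructure management",
    "medical device component",
    "employment screening",
    "credit scoring",
}
LIMITED_RISK = {"chatbot", "deepfake generation"}  # transparency duties apply


def triage(use_case: str) -> str:
    """Return the simplified obligation level for a described AI use case."""
    if use_case in PROHIBITED:
        return "prohibited: may not be placed on the EU market"
    if use_case in HIGH_RISK:
        return "high risk: requirements and conformity assessment before market entry"
    if use_case in LIMITED_RISK:
        return "limited risk: transparency obligations (e.g. disclose AI interaction)"
    return "minimal risk: no specific obligations under the Act"


if __name__ == "__main__":
    for case in ["medical device component", "chatbot", "recipe recommender"]:
        print(f"{case}: {triage(case)}")
```

Seen this way, the “one product for the whole world or one secure product for the EU” question Lisa raises is essentially about whether a vendor runs the compliant branch of this triage for every market or only for the European one.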

Auke Pals: Thank you very much, Lisa. So I hope the EU AI Act can steer us in the right direction and can be adapted once we learn how to make use of it. But that is also partly up to the market, I guess, and partly up to the standardization bodies that are involved in making standards for AI, which is the bridge to the next policy question. This policy question is about actions for international standardization bodies. Currently, lots of standardization bodies are trying to get involved in AI standardization, but we also see some challenges for those global bodies. So again a question for you all: what are the biggest challenges for global standardization of AI? I would like to encourage you to vote again, and the answers will appear on the screen. I’ll give you some time to grab your phone or log in on the voting page. Let’s see: scoping, indeed — so what do we consider AI? Finding experts, achieving consensus, compatibility. Translating fundamental rights into technical standards. Finding common ground. Different views on the existence of human beings. Whether AI is here to stay or is just the next blockchain hype — also interesting. Capacity of experts, local contexts, trust. Tough question. A universal conceptual understanding that AI replicates traits of minds and persons; integrating ethical and human rights standards into language models. We’ll move on to the next one. Really interesting challenges, and I think those are all true, to be honest. We are indeed perhaps in the early stage of AI and AI standardization. Currently a lot of standardization bodies are active in the field, trying to come up with standards, ethical guidelines, technical standards. However, what we also see is that those standards are quite fragmented: different standardization bodies are each trying to deal with AI in their own way. I don’t think AI is a hype — I don’t believe we are in the next blockchain hype, as one of the participants just suggested — but what I do see is that there is quite some overlap between standardization bodies and initiatives, each trying to make a standard according to their own best practices. That brings me to the second point I was trying to make: sector-specific complexity. What I see in my work is that some standards being created are quite generic, and those might be applicable to all kinds of AI use cases; however, some sectors really want more steering on how to make use of those standards. Healthcare, for instance, requires very different standards than the mobility industry does for autonomous cars, and the defence industry also requires different standards, which might not even be publicly shared. My third point is that there are real cultural and regulatory diversities. Lisa just mentioned the EU AI Act, which is applicable in the EU and for EU inhabitants, and reflects the EU way of thinking. When we create ethical standards or guidelines in one of the standardization bodies, the outcome can really differ depending on where those guidelines are created, which can contribute to a good debate that we might be able to have on a global scale in the future. My fourth point is: how can we balance this? Balancing between regulation, standardization and innovation.
As was mentioned in a different session I attended today, the question is how we can strike a balance between the regulatory side of AI initiatives and innovation. A small startup is not going to look at industry standards first, for instance; they will just build according to their own best practices and way of working, and regulation could hinder those startups in being innovative. My last point is the dynamic technology landscape. What we saw in regulation is that when the EU AI Act was being developed, generative AI was not that big in the beginning, but later on, in the final stages of the negotiation, it became quite big and there was an urgent need to regulate it. It is the same with standardization: standardization takes a lot of time and is ultimately created on the basis of what works best. With a sector that is so innovative and constantly taking up the newest technologies available, this is a challenge for standardization on a global scale. With this, I would like to conclude my introductory remarks and give the floor to Juliana.

Juliana Sakai: Hi everyone, thank you. So right now we have policy question three, with the theme of enhancing enforcement through civil society. I would like you to answer: in your opinion, what is the most significant role civil society can play in AI governance? Please share your thoughts — log in and share your opinion. The answers are coming in right now. Monitor the government. Don’t think in problems, think in solutions. Capacity building. Advocacy. Vote for parties that have good plans. Voice concerns. As users of public services, it’s important to have a voice through consultations. Democratic control. Assess the potential impact of AI systems. Defending and protecting values and the public interest. Facilitating dialogue. Participating in standardization processes. Monitor the government. So we are back again — thank you for sharing so much. Make the minority heard, yes. As a member of civil society, I would like to talk a little bit about the context we are in now that AI regulation is coming, and I would like to split it into two different contexts. One is the context in which specific AI regulations are currently being implemented, like the EU, or where companies are directly affected by them, like the US, as Lisa mentioned: the producers, the tech companies selling AI systems to the European Union, will have to comply with EU legislation. The other context is places like Brazil. We don’t really sell technology products to the EU, not massively. So let’s say the global majority is not directly affected by the legislation, but might experience an indirect influence. But let’s begin where AI regulations are currently being implemented. Watching a new piece of legislation being implemented, civil society has a huge role in shaping its enforcement: identifying what the problems are, where implementation is not working and why, and advocating for institutions to take measures. But in order to do all these assessments — to understand what is working and what is not, and then present these problems to society and to the institutions — we have to have real transparency. One thing is being able to obtain information from both the government and the companies to understand what is working and what is not. Once we secure a level of transparency, we can then really do the reporting and analysis. The second environment is, as I mentioned, where we don’t have a specific AI framework and don’t really sell products to the EU, and I want to share a little bit of the Brazilian experience and civil society’s experience so far on AI governance without a specific framework. I think the first thing we have to do is really understand the current legal framework that can actually protect rights in the context of AI use. In Brazil, for example, we have consumer rights law and the general data protection law, and both of them have been used to challenge the abusive use of certain tools, especially facial recognition systems. From that starting point, civil society can present complaints to Brazilian institutions. For example, IDEC, the Institute for Consumer Protection in Brazil, has filed administrative
and judicial procedures against the use of these tools in different scenarios — for example, when the São Paulo subway started using facial recognition for marketing and advertising purposes, collecting information on the reactions of people watching the ads, and also in clothing stores that were capturing information through facial recognition systems. In both cases there was absolutely no consent from the consumer. In this situation, the Institute for Consumer Protection won the cases both in the judicial and in the administrative sphere, so the Metro and the clothing stores stopped using the systems. This is more or less how I wanted to start the conversation: to make sure that even in contexts where we do not have a specific AI framework, we still have a lot to work with for the governance of these AI systems. And just to close, as I mentioned, we are currently discussing an AI bill in the Congress, where a risk-based framework is also being debated. So this is also where, at the end of the day, civil society is fighting to protect its rights in a more specific way against the dangers of AI systems.

Auke Pals: Thank you very much for your great input. Yes, we really see the importance of civil society being active in this, so it’s great that you and your organization are part of that. Before we move on, I’d like to look at the chat as well — okay, we’ll do that after the next speaker, and then I’ll share my screen again. Now I’m giving the floor to Ananda.

Ananda Gautam: Thank you so much for describing the role of civil society — I think Juliana has covered that well. So I will discuss this from a Global South perspective. We started the discussion by asking whether the EU AI Act will be a de facto regulation for AI, and I think it is the most comprehensive piece of legislation that exists today. Another important aspect is its extraterritorial jurisdiction, which means it can regulate AI products and services that come from outside the EU as well. In the context of developing nations and of human rights, the basic fundamentals of any AI system that we talk about are the quality of the data and the bias in the data. Looking from that perspective, there are two things that need to be considered: one is social, the other is technological. The fundamental thing, the quality of the data, is what trains the algorithm of the AI. In developing nations there are many challenges: we are talking about AI governance, but the digital divide still exists, and we are now also talking about an AI divide. In the Global South there are still more than 2 billion people who do not have access to the internet itself. And then data standards, data quality and data collection are very challenging, and without standardized data it is very hard to build AI models — they may be biased, or they may not be as effective as expected. The other side is technological: developing nations often lack the capacity to implement these systems or to build the models. And if we talk about policy and legislation, they also lack the capacity to build their own legislation. After the EU, many countries are trying to adopt their own laws, but they do not always do it well because they lack that capacity. So how we develop that capacity is one question. Another question is how AI can be leveraged in developing nations. If used correctly, maybe we can use it to help close the digital divide, to empower populations that do not have the same digital literacy as someone living in New York or somewhere in the EU, where digital literacy is no longer a problem, access to the internet is no longer a problem, and access to technology is not a problem. When we come to developing nations, there are many other access issues as well. There is a language barrier: AI models still cannot work well in many native languages. If we want to train them, we do not have enough data, and if they are trained on publicly available data, there are other consequences around copyright and related issues. This makes the development of AI a bit complex. But if developed nations help those countries build capacity, maybe developing nations can leverage the power of AI to address the issues we have been facing: improving access to technology, making systems more accessible in terms of language and other barriers, bringing medical facilities to rural areas, or enhancing education systems by implementing AI — we could create virtual teachers that interact with students, or personalized tutors.
There are many cases where AI could be leveraged in developing nations, but we have to be very mindful that these considerations are part of the global debate: how do we make AI systems more responsible? That has to be both societal and technological. If there are already biases in society, the AI algorithm will certainly be biased, because it is trained on the data that society has created. Until and unless the bias in society is addressed, we have to be very mindful about what we feed the AI system — that is very important. And there is one more thing about the AI ecosystem: during the dotcom boom around 2000, developing nations could not leverage the power of the internet the way developed nations did. But AI is still at a very early stage, and if we can accommodate the needs of developing nations and help them leverage AI, I think we can make them far more prosperous, in terms of the economy and of other social benefits. I would like to stop here.
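
[Editorial aside to Ananda's point about data quality and bias, not part of his remarks.] A first, very rough way to make the concern measurable is to audit how well a training corpus covers the languages and groups it is meant to serve. Everything in the sketch below — the records, field names and the 30% threshold — is hypothetical, chosen only for illustration.

```python
# Minimal, hypothetical sketch: auditing language and regional coverage of a
# training corpus before using it to train a model. All records and the
# threshold below are made up for illustration.
from collections import Counter

corpus = [
    {"text": "...", "language": "en", "region": "urban"},
    {"text": "...", "language": "en", "region": "urban"},
    {"text": "...", "language": "ne", "region": "rural"},  # Nepali, underrepresented
    {"text": "...", "language": "en", "region": "rural"},
]


def share_by(records, key):
    """Return each value's share of the corpus for the given field."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}


THRESHOLD = 0.30  # arbitrary example cut-off for flagging underrepresentation

for field in ("language", "region"):
    for value, share in share_by(corpus, field).items():
        if share < THRESHOLD:
            print(f"warning: {field}='{value}' is only {share:.0%} of the corpus")
```

A count-based check like this is only a starting point: it can flag an underrepresented language or region, but it says nothing about how those groups are portrayed in the data, which also requires qualitative review.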

Auke Pals: I think we can move to the discussion — yes, we have support here, thank you. The question is: does the AI Act offer possibilities for leveraging AI in developing nations while upholding human rights? Let me read some of the answers coming in. No, not by itself. Yes. I don’t think so. Human rights are not globally enforceable. Yes, as a guide to develop local regulatory frameworks. Possibly. On the contrary. Yes. Global organizations are being influenced. Inspirational only for European companies operating elsewhere. Not by itself. Does anyone on the panel want to react to these? They’re rotating on the screen right now.

Ananda Gautam: I see some of the comments coming in. My response would be that the AI Act itself cannot be leveraged to uphold human rights, because it is focused on what can and cannot be done — legislation is always about the do’s and don’ts of something. But if we instead have policies that accommodate the development of AI, and if the extraterritorial jurisdiction is also used as a way for developed nations to help developing nations leverage AI, that can be one option. Another way to uphold human rights is through the various frameworks that exist: UNESCO has one, and the OECD is working on the second iteration of its framework. Those kinds of frameworks would be among the fundamental practices that could help ensure human rights, not only in the developing-country context but in the global context.

Lisa Vermeer: Thank you. Yes, those are good questions. We had planned to have breakout discussions, but given that it’s already a quarter to six and we have until six o’clock, we thought: let’s just do a plenary, take questions from the floor and also from the online participants. Maybe first the question that was asked in the chat, about Gaza being a testing ground for the use of AI, which I think is very urgent and has been quite shocking to see. It is rather difficult to answer, because there is a lot of nuance to it. Let me first say that the AI Act in the EU is, of course, an initiative to try to make AI more responsible and to avoid AI systems that pose serious risks to, for example, the safety and fundamental rights of people. But the AI Act does not touch all AI systems: the military domain, defence and national security are excluded from it. That does not mean there is no ongoing discussion about these areas — rather the opposite. There has been a long discussion, especially at the international level, about the responsible use of AI in the military domain, for example the REAIM process initiated by the Netherlands, and there is also a long-running, mainly Geneva-based conversation about lethal autonomous weapons. Of course, in Gaza a lot of AI has still been used, and I am afraid — this is my personal opinion — that AI will be used for very bad purposes. But the discussion about how to tackle this, how to disincentivize it, how to make it impossible, is really on the table. It is being discussed between stakeholders, between governments and in UN bodies. So that gives some hope that it is on the agenda and that there may be change. But that is where we are now.

Auke Pals: Thank you, Lisa, for answering the question from the chat — a really urgent topic indeed. I would also like to ask a question to the audience, because I see some people in the audience who are involved in internet standards. My question is: what can we learn for the creation of AI standards from the internet standardization process? Can I give someone from the audience the floor?

AUDIENCE: Right, yeah, I can say something about that. My name is Wouter Cobus, and I’m with the platform Internet Standards. In my perspective, there’s quite a difference: the internet itself was built on standards, and I think those standards really formed the internet as we know it right now. Whereas AI — although not my expertise — seems to me more like a technology that is already out there, and now we’re trying to introduce standards to limit or control it. It’s not really founded on standards the way the internet was. So there is something to learn, but I think that is a key difference between AI and internet standards.

Auke Pals: Thank you, Wouter, for sharing your thoughts. My reflection on that is that AI indeed was not built on standards, but is now being regulated after the threats have become apparent. So now we are trying to retrofit usable standards in certain domains. Is there any other reflection from the audience? No? Then let’s move to the next slide.

AUDIENCE: I have a question, connecting to balancing innovation and control. I think it’s for you, Auke. Do you think there is a risk that big tech says no to the EU? And if so, what can be changed to balance the EU’s vision and the vision of big tech?

Auke Pals: Let me think about that. There is indeed a risk that big tech says no to the EU, and I think that not only on AI but also on other topics the EU is being challenged by big tech. So your question is what can be changed to balance the EU’s vision with that of big tech?

Lisa Vermeer: To be honest, I don’t think I have a clear answer to that; maybe some of my colleagues do. You do see this happening — Meta in particular is at the moment really ramping up against the AI Act and its consequences. The AI Act regulates general-purpose AI models, which include large language models, through a code of practice, and most large companies from the US have signed up to the AI Pact, an initiative from the European Commission to collaborate with companies on becoming compliant with the AI Act. But we now see that Meta in particular, and also other companies, are responding to this code of practice on general-purpose AI models — the large language models — and while some are constructive, others are really saying: we don’t want this, because it is going to make things very hard for us. I got this question a lot, especially during the negotiations of the AI Act: basically all countries were asked, do you see the AI Act as a barrier to innovation? The idea is that the AI Act is not a barrier to the right kind of innovation. Because it is a risk-oriented approach, a lot of AI falls outside its scope — though, to be honest, a lot of AI also falls within it. The argument is that the EU deliberately chose this: we want responsible AI to develop, innovate, grow and scale in the EU. And if large companies, from the US or other areas of the world, are not responsible enough — that is, not meeting the EU criteria — then it is the kind of AI we don’t want. So it is a balance. But if big tech says no to the EU, then the EU says no to big tech, and it really comes down to: do you want access to our market or not? The coming years will be interesting to watch, because we have a new European Commission in Brussels with quite some enforcement power, also under the Digital Services Act, which is a big law affecting large platforms, and the AI Act will apply in almost a year. How will they play their cards? We first had Commissioner Thierry Breton, who was very visible and forceful towards, for example, Elon Musk. Now the new commissioner has a more conciliatory tone towards the owner of X. The law is there, but it depends on how it works out in practice and how forcefully the EU is going to stand its ground and impose fines, etc. It remains to be seen. Thank you for your question.

Auke Pals: Thank you, Lisa, for also answering the question. I saw the hand raised from Karen online. Karen, are you there? I can unmute you.

Karen: Yeah, I’m sorry, I was writing my concern in the chat. I think the difference with the internet, for example, is that AI is not limited to giving or granting information — the information it gives is already biased. Another concern is that it also replicates and improves on human traits. It also interprets data, for example when it is used in medical devices or neurotechnological devices: it will read the information, evaluate it, interpret it, and then give feedback to the neurotechnology. I’m talking about, for example, electroencephalography devices that read brain activity, interpret it, and then send signals back to the neurotechnology to either activate or suppress some activity in the brain. This is not regulated at the level of design, development or use, and it is not regulated how this data will be moved transnationally. I think we have a lot of concerns, and it is a broad concern because it affects many dimensions, many fields. I do think that society does not fully understand the profound impact of using and interacting with AI. Thank you.

Auke Pals: Thank you very much, Karen. Juliana, do you want to reply to that?

Juliana Sakai: Yeah, sure. Thank you for your comments. We always have to follow technological developments and advancements, and I think this is really where civil society plays a big role: trying to explain what is going on and making more information available — and by that I also mean breaking down the consequences you just mentioned, Karen. For each kind of use in each field, civil society has to monitor the results, how the implementation is going, and how the testing of each system is working. And I think this sometimes has to be developed in parallel, with the help of the government. When we are talking about an impact assessment, it has to happen prior to the launch of a tool, and once it is launched, civil society should have the information and the data to collect and analyze what kind of impact and algorithmic bias a tool is producing, and how it might worsen inequality. So I think this is pretty much the field we will have to work on, so that at the end of the day civil society, the population, the consumers, the users as a whole have more information on how we should protect ourselves. And for this, organized civil society, academia and journalists are there to spread the information and support all the advocacy work. This is really important because, at the end of the day, institutions are held accountable when civil society makes demands. So there is a flow: civil society demands, and the institutions answer to those demands. We have to press and demand that institutions take real measures to protect people and to implement the regulations that are being proposed.

Auke Pals: Thank you very much for your response. I’m already getting the sign that the session is nearly at its end. I would like to give the opportunity for someone to reflect or make a last comment. If there is none, I would like to thank my panelists, Lisa, Ananda and Juliana, for being part of the session. I really think much more discussion could go on on this topic, but not within the 60 minutes that we have been given today. I would encourage you to stay in touch with us through LinkedIn — add us if you need us or want to start a new discussion. With this, I would like to close the session. Thank you very much. Thank you, Juliana and Manon. Bye-bye.

Lisa Vermeer

Speech speed

152 words per minute

Speech length

2035 words

Speech time

802 seconds

EU AI Act could become de facto global standard for AI governance

Explanation

The EU AI Act sets safety requirements for AI systems entering the EU market. Companies may build one type of AI system meeting EU requirements to sell globally, potentially making it a de facto standard.

Evidence

Example of AI systems for health sector being built to EU standards but benefiting hospitals worldwide

Major Discussion Point

Impact of EU AI Act globally

Agreed with

Ananda Gautam

Auke Pals

Agreed on

Global impact of EU AI Act

Differed with

Ananda Gautam

Differed on

Global impact of EU AI Act

EU AI Act may not be suitable for replication in other regions due to different regulatory contexts

Explanation

The EU has a specific regulatory ecosystem for the digital economy that differs from other regions. The product regulation approach of the AI Act may not be easily replicated elsewhere.

Evidence

Mentions existing EU regulations like GDPR and Data Act as context for AI Act

Major Discussion Point

Impact of EU AI Act globally

Differed with

Ananda Gautam

Differed on

Global impact of EU AI Act

Ananda Gautam

Speech speed

124 words per minute

Speech length

1000 words

Speech time

482 seconds

EU AI Act’s extraterritorial jurisdiction could influence AI products/services outside EU

Explanation

The EU AI Act has extraterritorial jurisdiction, meaning it can regulate AI products and services from outside the EU. This could influence AI development globally.

Major Discussion Point

Impact of EU AI Act globally

Agreed with

Lisa Vermeer

Auke Pals

Agreed on

Global impact of EU AI Act

Differed with

Lisa Vermeer

Differed on

Global impact of EU AI Act

Lack of capacity in developing nations to implement or create AI standards

Explanation

Developing nations often lack the technological capacity and expertise to implement AI standards or create their own AI regulations. This creates challenges in global AI governance.

Evidence

Mentions digital divide and lack of internet access for over 2 billion people in Global South

Major Discussion Point

Challenges in global AI standardization

Agreed with

Auke Pals

Agreed on

Challenges in global AI standardization

Potential to leverage AI to address development challenges

Explanation

AI could be used to address development challenges in Global South countries. It could help close the digital divide and empower populations with lower digital literacy.

Evidence

Examples of using AI for medical facilities in rural areas and enhancing education systems

Major Discussion Point

AI challenges and opportunities for developing nations

Need for frameworks to ensure AI upholds human rights globally

Explanation

Global frameworks are needed to ensure AI upholds human rights, especially in developing nations. Existing frameworks like UNESCO’s and OECD’s could be fundamental practices for ensuring human rights in AI.

Evidence

Mentions UNESCO and OECD frameworks as examples

Major Discussion Point

AI challenges and opportunities for developing nations

Importance of accommodating needs of developing nations in global AI governance

Explanation

Global AI governance should accommodate the needs of developing nations. This could help these countries leverage AI for economic and social benefits, unlike during the dotcom boom.

Evidence

Comparison to dotcom boom where developing nations couldn’t leverage internet power like developed nations

Major Discussion Point

AI challenges and opportunities for developing nations

Auke Pals

Speech speed

102 words per minute

Speech length

1570 words

Speech time

917 seconds

Big tech companies may resist complying with EU AI Act requirements

Explanation

There is a risk that large technology companies might refuse to comply with EU AI Act requirements. This creates a challenge in balancing EU’s vision for AI governance with the interests of big tech.

Evidence

Mentions Meta ramping up against AI Act and its consequences

Major Discussion Point

Impact of EU AI Act globally

Agreed with

Lisa Vermeer

Ananda Gautam

Agreed on

Global impact of EU AI Act

Fragmentation and overlap between different standardization bodies

Explanation

Multiple standardization bodies are active in creating AI standards, leading to fragmentation and overlap. This creates challenges in establishing coherent global AI standards.

Evidence

Observation of different standardization bodies trying to deal with AI in their own way

Major Discussion Point

Challenges in global AI standardization

Agreed with

Ananda Gautam

Agreed on

Challenges in global AI standardization

Balancing regulation, standardization and innovation

Explanation

There is a need to balance AI regulation and standardization with innovation. Strict regulations might hinder innovative initiatives, especially for small startups.

Evidence

Example of small startups not looking at industry standards first, but creating according to their best practices

Major Discussion Point

Challenges in global AI standardization

Rapidly evolving AI technology landscape makes standardization difficult

Explanation

The AI field is rapidly evolving, making it challenging to create timely and relevant standards. By the time standards are developed, the technology may have already advanced significantly.

Evidence

Example of generative AI becoming significant during final stages of EU AI Act negotiations

Major Discussion Point

Challenges in global AI standardization

Agreed with

Ananda Gautam

Agreed on

Challenges in global AI standardization

Juliana Sakai

Speech speed

101 words per minute

Speech length

1231 words

Speech time

729 seconds

Monitoring government use of AI systems

Explanation

Civil society plays a crucial role in monitoring how governments use AI systems. This involves identifying problems in implementation and advocating for institutions to take measures.

Evidence

Example of Brazilian civil society filing complaints against facial recognition systems used without consent

Major Discussion Point

Role of civil society in AI governance

Advocating for transparency and accountability in AI implementation

Explanation

Civil society organizations advocate for transparency in AI implementation by both governments and companies. This allows for assessment of what is working and what isn’t in AI governance.

Evidence

Mentions need for real transparency to assess information from government and companies

Major Discussion Point

Role of civil society in AI governance

Facilitating dialogue between stakeholders on AI governance

Explanation

Civil society plays a role in facilitating dialogue between different stakeholders on AI governance. This includes spreading information and supporting advocacy work.

Evidence

Mentions civil society, academia, and journalists working to spread information and support advocacy

Major Discussion Point

Role of civil society in AI governance

Agreements

Agreement Points

Global impact of EU AI Act

Lisa Vermeer

Ananda Gautam

Auke Pals

EU AI Act could become de facto global standard for AI governance

EU AI Act’s extraterritorial jurisdiction could influence AI products/services outside EU

Big tech companies may resist complying with EU AI Act requirements

The speakers agree that the EU AI Act has potential for global impact, whether through becoming a de facto standard, influencing products/services outside the EU, or causing resistance from big tech companies.

Challenges in global AI standardization

Ananda Gautam

Auke Pals

Lack of capacity in developing nations to implement or create AI standards

Fragmentation and overlap between different standardization bodies

Rapidly evolving AI technology landscape makes standardization difficult

Both speakers highlight various challenges in creating and implementing global AI standards, including capacity issues in developing nations, fragmentation among standardization bodies, and the rapidly evolving nature of AI technology.

Similar Viewpoints

Both speakers emphasize the importance of responsible AI use and governance, particularly in addressing development challenges and ensuring proper government use of AI systems.

Ananda Gautam

Juliana Sakai

Potential to leverage AI to address development challenges

Monitoring government use of AI systems

Unexpected Consensus

Importance of civil society in AI governance

Ananda Gautam

Juliana Sakai

Need for frameworks to ensure AI upholds human rights globally

Advocating for transparency and accountability in AI implementation

Facilitating dialogue between stakeholders on AI governance

While not unexpected, there was a strong consensus on the crucial role of civil society in AI governance, spanning from developing nations’ perspective to more general global governance issues.

Overall Assessment

Summary

The main areas of agreement include the global impact of the EU AI Act, challenges in global AI standardization, the potential of AI to address development challenges, and the importance of civil society in AI governance.

Consensus level

There is a moderate level of consensus among the speakers, particularly on the challenges and potential impacts of AI governance. This consensus suggests a shared understanding of the complex issues surrounding AI regulation and standardization, which could facilitate more coordinated efforts in addressing these challenges globally.

Differences

Different Viewpoints

Global impact of EU AI Act

Lisa Vermeer

Ananda Gautam

EU AI Act could become de facto global standard for AI governance

EU AI Act may not be suitable for replication in other regions due to different regulatory contexts

EU AI Act’s extraterritorial jurisdiction could influence AI products/services outside EU

Lisa Vermeer presents both arguments for and against the EU AI Act becoming a global standard, while Ananda Gautam focuses more on its potential influence through extraterritorial jurisdiction, particularly on developing nations.

Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the global impact of the EU AI Act, the challenges in implementing global AI standards, and the role of developing nations in AI governance.

difference_level

The level of disagreement among the speakers is moderate. While there are differing perspectives on certain issues, there is also a significant amount of agreement on the challenges and complexities of global AI governance. This level of disagreement is constructive for the topic at hand, as it highlights the multifaceted nature of AI regulation and the need for diverse perspectives in shaping global AI policies.

Partial Agreements

Partial Agreements

All speakers agree on the challenges of implementing global AI standards, but they focus on different aspects: Lisa on regulatory contexts, Ananda on capacity issues in developing nations, and Auke on balancing regulation with innovation.

Lisa Vermeer

Ananda Gautam

Auke Pals

EU AI Act may not be suitable for replication in other regions due to different regulatory contexts

Lack of capacity in developing nations to implement or create AI standards

Balancing regulation, standardization and innovation

Similar Viewpoints

Both speakers emphasize the importance of responsible AI use and governance, particularly in addressing development challenges and ensuring proper government use of AI systems.

Ananda Gautam

Juliana Sakai

Potential to leverage AI to address development challenges

Monitoring government use of AI systems

Takeaways

Key Takeaways

The EU AI Act could potentially become a de facto global standard for AI governance, but may not be suitable for direct replication in other regions

Global AI standardization faces challenges like fragmentation between bodies, balancing regulation and innovation, and rapidly evolving technology

Civil society plays important roles in AI governance including monitoring, advocacy, capacity building, and facilitating dialogue

Developing nations face challenges in AI adoption but also opportunities to leverage AI for development if their needs are accommodated in global governance frameworks

Resolutions and Action Items

None identified

Unresolved Issues

How to effectively enforce broad AI regulations like the EU AI Act

How to balance innovation and control in AI governance, especially with resistance from big tech companies

How to address the lack of quality data and technological capacity for AI in developing nations

How to ensure AI systems uphold human rights globally, especially in military/security domains

Suggested Compromises

Developing AI governance frameworks that can serve as guides for local regulatory frameworks rather than direct replication of EU AI Act

Balancing strict requirements for high-risk AI applications with more flexibility for low-risk innovations

Collaboration between developed and developing nations to build AI capacity while addressing local needs and contexts

Thought Provoking Comments

The EU AI Act’s enforcement promises to be really challenging because the AI Act is very broad and it sets rules for different areas, a whole range of areas, basically touching all industries and all public areas — and how do you effectively enforce a law?

speaker

Lisa Vermeer

reason

This comment highlights a critical challenge in implementing AI regulation on a broad scale, raising important questions about practical enforcement.

impact

It shifted the discussion from theoretical benefits of EU AI regulation to practical challenges of implementation and enforcement across diverse sectors.

What we did see in regulation is when the EU AI Act was being developed, generative AI was not that big in the beginning, but later on in the final stages of the negotiation, it became quite big and there was an urge and a need to regulate that.

speaker

Auke Pals

reason

This observation underscores the rapid pace of AI development and the challenge of creating regulations that can keep up with emerging technologies.

impact

It introduced the idea of the dynamic nature of AI technology and the need for flexible, adaptable regulation approaches.

In the context of developing nations and on the context of human rights, so the basic fundamental of AI system that we talk is about the quality of the data and the bias of the data.

speaker

Ananda Gautam

reason

This comment brings attention to the often overlooked challenges faced by developing nations in AI development and regulation, particularly regarding data quality and bias.

impact

It broadened the discussion to include global perspectives and highlighted the potential for AI to exacerbate existing inequalities.

AI is not limited to give information or grant information. The information that it gives is biased already. Another concern is that it also replicates and improves human traits. It also interprets data, for example, when it’s used on medical devices or neurotechnological devices.

speaker

Karen

reason

This comment raises complex ethical and practical concerns about AI’s ability to interpret and influence human behavior, particularly in sensitive areas like healthcare.

impact

It deepened the conversation by introducing more nuanced concerns about AI’s societal impact beyond just information provision, touching on issues of autonomy and medical ethics.

Overall Assessment

These key comments shaped the discussion by broadening its scope from initial focus on EU regulation to encompass global perspectives, practical implementation challenges, and deeper ethical considerations. They highlighted the complexity of AI governance, emphasizing the need for flexible, culturally sensitive approaches that can adapt to rapidly evolving technology while addressing fundamental issues of data quality, bias, and human rights.

Follow-up Questions

How can we effectively enforce broad AI regulations like the EU AI Act?

speaker

Lisa Vermeer

explanation

This is a major challenge for regulators and policymakers, as the broad scope of the AI Act makes oversight and enforcement complex.

How can we balance AI regulation, standardization, and innovation, particularly for small startups?

speaker

Auke Pals

explanation

There’s a need to find ways to regulate AI without hindering innovation, especially for smaller companies with limited resources.

How can developing nations build capacity to implement or create their own AI regulations?

speaker

Ananda Gautam

explanation

Many developing countries lack the technical and policy expertise to effectively regulate AI, which could lead to implementation challenges or inadequate protections.

How can AI be leveraged in developing nations to address issues like digital divide, language barriers, and access to education and healthcare?

speaker

Ananda Gautam

explanation

There’s potential for AI to help solve development challenges, but this requires careful consideration of local contexts and needs.

How can we ensure AI systems are trained on unbiased, high-quality data, particularly in developing nations?

speaker

Ananda Gautam

explanation

The quality and representativeness of training data is crucial for creating fair and effective AI systems, but this is particularly challenging in contexts with limited data infrastructure.

How can the international community address the use of AI in military contexts, given that this is often excluded from civilian AI regulations?

speaker

Lisa Vermeer

explanation

The use of AI in military applications raises significant ethical and security concerns that aren’t addressed by regulations like the EU AI Act.

How will the relationship between big tech companies and EU regulators evolve with the implementation of the AI Act?

speaker

Audience member (unnamed)

explanation

There’s tension between tech companies’ desire for innovation and the EU’s regulatory approach, which could impact the development and deployment of AI technologies.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #148 Making the Internet greener and more sustainable

WS #148 Making the Internet greener and more sustainable

Session at a Glance

Summary

This workshop focused on the concept of a greener Internet and the roles of various stakeholders in promoting sustainability in digital infrastructure. The discussion began with an introduction to the energy costs associated with Internet usage and the need for more efficient practices. Participants from academia, industry, and civil society shared their perspectives on three main policy questions.

The first question addressed the duties of stakeholders in creating a greener Internet. Speakers emphasized the importance of collaboration, energy-efficient practices, government regulations, and user awareness. The second question explored how sustainability efforts affect access for new users. Participants noted both positive and negative impacts, highlighting the potential for lower operational costs but also the challenges of initial investments in sustainable technologies.

The third question focused on spreading awareness and adoption of green standards. Speakers suggested strategies such as education campaigns, incentive programs, community involvement, and partnerships between stakeholders. Throughout the discussion, common themes emerged, including the need for collaboration, the importance of balancing sustainability with affordability, and the role of government policies in driving change.

The workshop also touched on specific issues such as the challenges faced by underserved communities, the potential of IPv6 in reducing energy consumption, and the need for a holistic approach to sustainability in the digital ecosystem. Participants agreed that while progress towards a greener Internet presents challenges, it is essential for the long-term sustainability of digital infrastructure and addressing climate change.

Keypoints

Major discussion points:

– The duties and roles of different stakeholders (ISPs, governments, academia, end users) in creating a greener internet

– Challenges in implementing sustainable internet infrastructure, especially in developing regions

– How sustainability efforts impact access for new users, including potential positive and negative effects

– Ways to spread awareness and adoption of green standards for internet infrastructure

The overall purpose of the discussion was to raise awareness about the need for more sustainable internet infrastructure and practices, gather perspectives from different stakeholders, and encourage collaboration to promote “greener” internet development.

The tone of the discussion was generally constructive and solution-oriented. Speakers approached the topic seriously but with optimism about potential for positive change. There was an emphasis on the need for collaboration between different stakeholders. The tone became slightly rushed towards the end as the moderator tried to fit in all the planned questions within the time constraints.

Speakers

– Lucas Jorge da Silva: Moderator

– Nathalia Sautchuk Patricio: PhD candidate and research assistant at the Karlsruhe University of Applied Sciences in Germany

– Jeffrey Llanto: Pioneer in the Philippine IT industry, instrumental in the country’s first internet connection in 1994, co-founder of CVISnet

– Tiago Jun Nakamura: Project analyst at NIC.br, organizer

– Eunice Perez Coello: Technical collaborator and network administration at Coprel Telecom, specializing in high-performance network design with a master’s in applied computing

– Pedro Camara: ISP specialist

Additional speakers:

– Eduardo Barasal Morales: Coordinator of training at NIC.br, moderator

– Natalia de Souza Rufino: Member of the Youth Brazil Program, reporter

– Jimson Olufuye: Consultant on data center digitalization, from Africa City Alliance, Abuja, Nigeria

Full session report

Expanded Summary: Workshop on Creating a Greener Internet

Introduction:

This workshop explored the concept of a greener Internet and the roles of various stakeholders in promoting sustainability in digital infrastructure. The discussion brought together experts from academia, industry, and civil society to address three main policy questions, sharing diverse perspectives on the challenges and opportunities in creating a more sustainable digital ecosystem.

Key Participants:

– Nathalia Sautchuk Patricio: PhD candidate and researcher in Internet Governance

– Eunice Perez Coello: Technical collaborator specializing in high-performance network design

– Jeffrey Llanto: Pioneer in the Philippine IT industry and advocate for community empowerment

– Pedro Camara: ISP specialist with expertise in network infrastructure

– Lucas Jorge da Silva: Speaker and organizer

Policy Questions Addressed:

1. What are the duties of different stakeholders in creating a greener Internet?

2. How do sustainability efforts impact access for new users?

3. How can we spread awareness and adoption of green standards in Internet infrastructure?

Key Discussion Points:

1. Duties of Stakeholders in Creating a Greener Internet:

Nathalia Sautchuk Patricio emphasized the importance of collaboration across sectors and highlighted the role of governments in creating policies and providing incentives for sustainable practices. Eunice Perez Coello stressed that Internet Service Providers (ISPs) must adopt energy-efficient practices. Jeffrey Llanto underscored the need to understand real-world impacts on vulnerable communities, citing examples of climate change effects in the Philippines. Pedro Camara echoed the importance of coordinated efforts among stakeholders to reduce environmental impact.

2. Impact of Sustainability Efforts on Access for New Users:

Nathalia Sautchuk Patricio emphasized the need to balance sustainability with affordability. Jeffrey Llanto stressed the importance of empowering local communities, particularly in underserved areas, providing examples of how internet access has improved lives in remote Philippine villages. Pedro Camara noted that while initial costs may hinder access, there are long-term benefits to sustainable practices. He suggested that optimization and energy-efficient infrastructure could potentially lower operational costs for ISPs, making Internet access more affordable. Eunice Perez Coello pointed out the resource disparities in underserved areas, highlighting the challenges in implementing sustainable solutions in regions lacking basic infrastructure, particularly in Latin America.

3. Spreading Awareness and Adoption of Green Standards:

Eunice Perez Coello emphasized the importance of education and public campaigns. Nathalia Sautchuk Patricio suggested government incentives and certifications as potential drivers for adoption. Jeffrey Llanto highlighted the crucial role of community involvement and ownership in sustainable Internet projects. Pedro Camara advocated for collaboration between stakeholders to amplify efforts in promoting green standards. Lucas Jorge da Silva mentioned the Green Work Group within the IETF as an example of ongoing efforts to promote sustainability in internet infrastructure.

Challenges and Considerations:

Throughout the discussion, several challenges were identified in implementing sustainable Internet infrastructure:

1. Initial costs and affordability, particularly for smaller providers and underserved communities

2. Lack of global consensus on green Internet standards

3. Disparities between regions and communities in terms of resources and existing infrastructure

4. Balancing sustainability efforts with performance and expansion of Internet access, especially in developing regions

5. Implementing sustainable solutions in areas lacking basic infrastructure, as highlighted by Eunice Perez Coello’s comments on Latin America

Audience Engagement:

The workshop utilized a quiz platform to gather audience input on key questions, enhancing participant engagement. Time constraints limited the full exploration of audience comments, but some participants shared their perspectives on the challenges and opportunities in creating a greener Internet.

Key Takeaways and Future Directions:

1. Collaboration across different stakeholder groups is crucial for developing a greener Internet.

2. There is a need to balance sustainability efforts with ensuring affordable access, especially in developing regions.

3. Education, awareness campaigns, and incentives are important for promoting green practices.

4. Community involvement and empowerment are key, particularly for underserved areas.

5. Regulatory approaches and standards can help drive adoption of green practices.

Unresolved issues include overcoming the initial high costs of transitioning to greener technologies, especially for smaller providers and underserved communities, and achieving a global consensus on green Internet standards.

The workshop concluded with closing remarks by Thiago Jun Nakamura, who summarized the key points and emphasized the importance of continuing the discussion on creating a greener Internet.

In summary, the workshop provided a comprehensive exploration of the challenges and opportunities in creating a greener Internet, emphasizing the need for collaborative, nuanced approaches that consider the diverse needs of different communities and stakeholders.

Session Transcript

Lucas Jorge da Silva: a technical collaborator and network administration at the Coprel Telecom, specializing in high-performance network design with a master’s in applied computing. And we have two representatives of civil society, Ms. Natalia Soutchouk-Patricio, a PhD candidate and researcher, assistant at Karsua, I think it’s pronounced that, University of Applied Sciences in Germany. Germany is difficult to spell. And we have Jeffrey Lento, a pioneer in the Philippine IT industry, instrumental in the country’s first internet connection in 1994, and co-founder of CVISnet. As we can see, we have brought together a specialist from various fields, ensuring diversity, and arranging the workshop with different perspectives. Secondly, I’d like to thank you, my boss, Eduardo Barraza Morales. He is one of the moderators, and he’s the coordinator of training at NIC.br. And one of our organizers, Thiago Jun Nakamura, project analyst at NIC.br, and my work colleague. And last but not least, our reporter, Natalia de Souza Rufino, member of the Youth Brazil Program. Okay, now I’d like to share some slides. Let me share my screen. Let’s see. Okay. Everyone can see the slides? All right. So here are the people who I was talking to. And for the beginning, I’d like to introduce our agenda. The first part will be a brief introduction to set the context for our topic. Then we’ll use a quiz platform to encourage activity participation from the audience. I’d like to thank all of you who are here with me on site. And we use the quiz, and you ask a question, and the people from here and online, you answer with only one word. And this word will guide the discussion and help us identify the most relevant points for the audience. After gathering this answer, our speakers will comment on the policy question. We have three policy questions, and based on their experience and opinion, they will talk about our workshop. We will hold three rounds of this discussion, and finally, we have an open mic session where the audience can ask questions or share comments about the workshop. Here we have the policy questions. The first one, what are the duties of the stakeholders for a greener Internet? Two, how does sustainability affect access for new users? And three, how can we spread our readiness in adoption of green standards? Well, I will do my introduction very quickly, and I’d like to begin with a question. How many aspects of your lives don’t involve the Internet? So we see the Internet has become an intricate part of our lives, but this comes with a significant energy cost to operate. And while the term cloud is often used like a magical word, we know that cloud means Internet, and Internet relies on massive data centres that consume a lot of energy, and sometimes using non-renewable sources for this. A significant part of the Internet, a part of humanity, still lacks the Internet access, and we see that in the open session that one-third of the humanity remains offline. So we need to expand that, and if today the Internet consumes a lot of energy, if we continue to expand the Internet, we will use more resources than we have. So, for this problem, there’s a concept of green networks that minimise the environmental impact of Internet infrastructure while optimising the resources, the use of these resources that we talk about. 
The goal is to do more with less, using fewer resources without the effective functionality of the Internet, so use less resources and still have the Internet, still have the performance of the Internet, like use the renewable energy in data centres, efficient manufacturing of network components, development of energy-efficient protocols, and the dynamic resource scale. One good initiative that we saw in the last weeks was the Green Work Group, the Gathering for Energy-Efficient Network. It’s a work group within the ITF, the Internet Engineering Task Force, and this group focuses on improving energy-efficient network technologies, creating frameworks, metrics, creating new protocols, or updating new protocols, old protocols, to optimise the energy used in networks. This last meeting of ITF happened last month in Dublin, Ireland, and the discussion was recorded, so you can access the link in the YouTube and watch the session. I think they have like two hours of length, but that is an important initiative from the ITF. So, what is the role of the stakeholders playing in this scenario? Collaboration, I think, is the word key for this, so we can address the critical issues, like what is the problem, and how we can solve this problem. So, it’s crucial that different stakeholders work together to drive this change. That is the main objective of today’s workshop, because we invited some different people to talk about this problem, and some results, or maybe solutions to this problem. So the stakeholders can take like adopted green practices, innovating protocols, and hardware, and balance the sustainability with the performance and expansion. Well, what is the goal of the workshop? While we understand that one hour is not enough to solve all this problem, we want to raise the awareness about this topic, which may be new to some people, and aim to encourage the collaboration, gather new options from different test sectors on the internet, and ultimately promote actions or create new initiatives from here in the IGF from React. So, for that, we like you online or on-site to share your ideas and solutions, and collaborate to build a green internet. So, this is a very quick introduction, and now we use a quiz platform to get some ideas, and after that, you pass the floor for our speakers. So, let me share the quiz platform. Okay. All right. So, I think that everyone can join the quiz. Please enter in your web browser, joinmyquiz.com, and put the pin, it’s 410266. And you have three questions, and we ask for you to respond, reply, with only one word, and we can make a beautiful word cloud. So, everyone on site and online can join the quiz. We have Eduardo Barraza-Morales, join the quiz. So, we have 15 participants. I don’t know how many we have on Zoom. So, I think it’s 15. The number. The camera. All right. So, I will start the quiz. Let’s start the first question. The audience will have one minute to answer, so think well. The first question, in one word, what do you consider essential for moving toward a greener Internet? We have a great collaboration, sustainability, agreement, money, money is important. Will is an important thing. Just one word. Power management is two words. Global warming, energy efficiency. Seven seconds. All right. Time’s up. We have a lot of great ideas, but remember, only one word. I see two words together. I see it’s centric, but all right. 
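
As an aside on the “dynamic resource scaling” Lucas mentions: the basic mechanism is to put under-used equipment into a low-power state and wake it only when traffic needs it. The short sketch below illustrates that idea; the port names, power figures, and utilization threshold are all assumptions chosen for the example, not anything presented in the session.

```python
# A minimal sketch of dynamic resource scaling: put network ports to sleep when
# their measured utilization stays low, and estimate the resulting power draw.
# All names, power figures, and thresholds are assumptions for illustration.
from dataclasses import dataclass

ACTIVE_WATTS = 5.0      # assumed draw of an active port
SLEEP_WATTS = 0.5       # assumed draw of a port in low-power mode
SLEEP_THRESHOLD = 0.05  # put a port to sleep below 5% average utilization

@dataclass
class Port:
    name: str
    utilization: float  # average utilization over the last interval, 0.0-1.0

def plan_power_states(ports):
    """Return {port name: 'sleep' or 'active'} based on recent utilization."""
    return {p.name: "sleep" if p.utilization < SLEEP_THRESHOLD else "active"
            for p in ports}

def estimated_watts(plan):
    """Total draw implied by the chosen power states."""
    return sum(SLEEP_WATTS if state == "sleep" else ACTIVE_WATTS
               for state in plan.values())

if __name__ == "__main__":
    ports = [Port("eth0", 0.42), Port("eth1", 0.01), Port("eth2", 0.00)]
    plan = plan_power_states(ports)
    print(plan)
    print(f"estimated draw: {estimated_watts(plan):.1f} W "
          f"(vs {ACTIVE_WATTS * len(ports):.1f} W always-on)")
```
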
And now I like to start with the first policy question, and the speakers can use these words and this question as a base to the presentation. Let me share again the PDF. All right. Here is the first policy question, and I will start in the alphabetical order. So, I like to invite Eunice Perez to talk about this first policy question. What are the duties of stakeholders for a greener Internet?

Eunice Pérez Coello: Thank you. Hello, everyone. I would like to thank you for the invitation, the first time I participate in this great forum, and I’m excited to talk about this very important topic, making the Internet greener and more sustainable in my role as an academic. I think it’s an Internet collaborative ecosystem. Each stakeholder has a unique vehicle pool. Internet service providers and such companions need to adopt energy efficient practice. Governments must enforce eco-friendly regulations. Businesses are expected to align with sustainability standards and end users can reduce their digital burden. Academia connects these efforts that are leading in innovation and education. For example, in the University of the London, I’m doing research of the U.S. reserve installed on Raspberry Pi, where I analyze the energy consumption. Universities integrating sustainable into engineers’ curricula, equipment, future leaders with the skills to address environment challenge. Academia also facilitated the partnership, translating human age research into a collective solution. Together, I think these efforts ensure we progress forward toward a greener Internet.
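
Eunice’s Raspberry Pi study points at a concrete measurement task: turning logged power readings into energy figures so two server setups can be compared. The sketch below shows one minimal way to do that; the CSV layout and file names are assumptions for illustration only, not a description of her actual setup.

```python
# A minimal sketch (not from the session) of turning raw power readings into
# comparable energy figures. The CSV format and file names are assumptions:
# each row is "timestamp_in_seconds,watts", e.g. captured with an external
# power meter attached to the Raspberry Pi.
import csv

def load_samples(path):
    """Read (time in seconds, power in watts) pairs from a two-column CSV."""
    with open(path, newline="") as f:
        return [(float(t), float(w)) for t, w in csv.reader(f)]

def energy_wh(samples):
    """Integrate power over time with the trapezoidal rule; returns watt-hours."""
    joules = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        joules += (p0 + p1) / 2.0 * (t1 - t0)
    return joules / 3600.0

if __name__ == "__main__":
    idle = load_samples("raspberrypi_idle.csv")        # hypothetical capture
    serving = load_samples("raspberrypi_serving.csv")  # hypothetical capture
    print(f"idle:    {energy_wh(idle):.2f} Wh")
    print(f"serving: {energy_wh(serving):.2f} Wh")
```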

Lucas Jorge da Silva: All right. Thank you, and it’s a great pleasure to be here with you and for accepting working in our workshop. So, I’d like to invite Mr. Jeffrey Lentol to talk about this first best question. What are the duties of stakeholders for a greener Internet?

Jeffrey Llanto: Good evening, everybody. And from us here in the Philippines right now, it’s really raining very hard, and I think it’s a fact of the global warming. So, anyway, what are the duties of stakeholders for a greener Internet? First, my role here is that I’m more on connecting the community. I have projects that I’ve submitted to NOCAS together with APNIC that I go directly to the communities and real-time scenario on what is happening at the grassroots level. So, we are talking about communities that doesn’t have Internet connection. It doesn’t even have electricity. Much worse, they don’t even have water. So, these are the areas we call them as the GIDA or GIDA, the Geographically Isolated and Disadvantaged Areas. And these areas are very vulnerable, especially during disaster and climate change. So, we’re talking about greener Internet. We need to go first, who are the people who are really affected by climate change. So, there is one area that we’re working with USAID that this school submerged. It’s part of another island in the Philippines. It submerged every high tide. So, we provided them with a satellite Internet. Then I noticed that during high tide, the sea level will go inside the classroom. So, it’s very funny because it’s elevated. The schools, the chairs are being elevated. So, the students are up to there. So, again, duties of stakeholders for me for a greener Internet is first, we need to look at real scenarios. What are happening? What is happening to the real world right now? So, as I mentioned today, that there’s a typhoon incoming to the Philippines. And it’s very unlikely. Just to give you an idea, we named typhoons by alphabets, letter A to letter Z for each year. We ran out of alphabet. That’s why we go back to letter A. So, it’s more than 24 typhoons that we encountered. So, again, greener Internet really needs to know more, especially, in my opinion, especially the forest degradation is very important. And it’s really affecting us here in the islands. So, for me, again, stakeholders, you must know. what is happening on the Al-Sinai. Thank you very much, Lucas.

Lucas Jorge da Silva: Thank you, Geoffrey. It’s important to remember the effect in the global warming and people who are directly affected because of that. Alright? So, I’d like to invite Miss Natalia Sauciuk. Hello, Natalia. Can you hear us? Yes. Hello. Thank you for participating in our workshop. And I’d like to ask you what are the duties of stakeholders for the greener Internet?

Nathalia Sautchuk Patricio: Okay. Thanks for the invitation to participate in this important workshop. Directly, greetings for KAUSHUA in Germany. I can let you know how to spell the name. I tried. I tried. No problem. No problem. This is very common. But anyways, in your quiz I put the wheel. The word wheel because I think everything starts with the wheel. The stakeholders need to have this common goal to go in the direction of a greener Internet. But there is, of course, every stakeholder has their own contributions to this. For example, when we think about network providers, we imagine that they need to implement this in the first way. Like to have really very practical stuff. It’s like to have hardware that is energy efficient and also try to change the way that they use energy like to more renewable energy and think about sustainable operations. This, of course, has some costs especially when you are thinking that nowadays we are not using most of the equipment that we have already in operation. Some of them are not energy efficient. So to move forward in this direction to be more efficient we have some costs. In this way, the duty for example for governments is to help in some way like creating policies in this matter. Like to help or provide some kind of motivation or incentives to move in this direction. The direction to use better solutions in terms of energy. So maybe governments will need to invest and help companies also to move in this direction because it’s not possible to imagine especially in some countries that we know like is the case of Brazil that has a lot of small network providers if they will not have sufficient budget to invest on that in this moment. Or like our friend told us about communities. Those communities also will need some help from governments and investments in this direction. They cannot afford this alone. And how as end users or consumers we can also think about our duty on that. It’s also like trying to manage our usage on internet because we know that when we use more things when it’s not necessary of course we are using resources and this is not about saying that people will not use internet anymore but maybe be a little bit more aware what is the impact of this use in the sustainability of the world as well as we imagine about water for example when we don’t waste water because we know that this is a resource that impacts our life and by the way although internet is not a resource as a water this impacts also in other kinds of resources like water or something like that. This is something that we have to increase the awareness in the users for that. This is some of the things that I would like to point out but there is much more that we could think about the duties of stakeholders in this matter.

Lucas Jorge da Silva: Thank you Natalia and it’s funny that you mentioned the ISPs Internet Service Providers because the next person the next speaker is Pedro Câmara and he is a ISP specialist so Pedro, as a professional in the ISP area what are the duties of the stakeholders for the greener internet?

Pedro Camara: The duties of stakeholders for greener internet involve coordination efforts to reduce environmental impact across the industry ISPs must prioritize energy efficient operations by data centers adopting greener hardware and implementing sustainable practices like equipment recycling and renewable energy adaptation. Governments and regulators play a key role by enacting policies that promote sustainability, providing incentives for using renewable energy and setting backmarks for energy efficiency. Equipment manufacturers need to design eco-friendly hardware with lower energy consumption longer lifespan and recyclable materials. Meanwhile, businesses and consumers should adopt sustainable usage habits so as to reduce unnecessary traffic and responsibly recycle devices. Collaboration across all these groups is essential to build a greener more sustainable internet.

Lucas Jorge da Silva: Thank you, Pedro. Let’s get the next question in our word cloud. Let me go out from the zoom and share the quiz. What is the quiz? Ah, here. Next question. Remember? One word. In your opinion, what is the biggest challenge for sustainable internet infrastructure? Thank you. Talking about the duties of stakeholder. Now, what is the biggest challenge to sustainable internet infrastructure? Once again, we have money, fuel, disparity, costs, infrastructure, disparity, costs, regulation, relevance. Oh, now it’s much better with only one word. 10 seconds. Three, two, one. We have a lot of interesting words. like policies, regulation, government, disparity, obsolescence is important, money, costs, food, power, energy, scale, standards, and relevance. So now I will do the inverted order, I will start with Pedro. So let’s get the next policy question, let me share. The next policy question is how do sustainability efforts affect access for new users? As Pedro is an ESP, so what do you want, what do you think about that? How does it affect the new users, like new users in your ESP, how does it affect it?

Pedro Camara: Okay, sustainability efforts can positively and negatively impact access for new users, depending on how they are implemented. On one hand, optimization, energy usage, and infrastructure can lower operation cost for ESPs, potentially reducing service costs and making internet access more affordable. Efforts to extend the lifecycle of devices through recycle can also create more affordable options for users in underserved areas. However, there can be challenges, such as initial cost of transition to greater technology, which might lead to higher prices or slower expansion into a new region. To ensure sustainability doesn’t hinder access, stakeholders must balance green initiativity with affordability, priority scalable solution, and promotion partnership that expands sustainable connectivity to under-connected areas.

Lucas Jorge da Silva: Thank you, Pedro. One important thing that I want to highlight is the use of IPv6, and IPv6 wasn’t designed to sustainability, but when we use IPv6 and reduce the use of CGNet, for example, in an internet service provider, the cost and consumption of energy is lower, so you are saving money and saving the use of energy. So the main idea of IPv6 is not to be a green protocol, but with the use, the consequence is less use of energy, and this is good for the environment. So now I’d like to ask Natalia, with your years of experience, what are your thoughts about how do sustainability efforts affect access for new users?
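
The IPv6 point above rests on simple arithmetic: the fewer subscribers an ISP has to keep behind CGNAT, the fewer translation appliances run around the clock. The sketch below makes that back-of-envelope reasoning explicit; every power and price figure is an assumption chosen only to illustrate the calculation, not data from the session.

```python
# Back-of-envelope sketch: energy avoided when CGNAT appliances can be retired
# after native IPv6 adoption reduces the need for address translation.
# All figures are assumptions for illustration; a real assessment would use the
# operator's own measurements and tariffs.
HOURS_PER_YEAR = 24 * 365
ASSUMED_WATTS_PER_CGNAT_APPLIANCE = 400   # assumption
ASSUMED_PRICE_PER_KWH = 0.15              # assumption, in local currency

def annual_savings(appliances_retired: int) -> tuple[float, float]:
    """Return (kWh saved per year, energy cost saved per year)."""
    kwh = (appliances_retired * ASSUMED_WATTS_PER_CGNAT_APPLIANCE
           * HOURS_PER_YEAR / 1000)
    return kwh, kwh * ASSUMED_PRICE_PER_KWH

if __name__ == "__main__":
    kwh, cost = annual_savings(appliances_retired=3)
    print(f"~{kwh:,.0f} kWh/year and ~{cost:,.2f} in energy costs avoided")
```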

Nathalia Sautchuk Patricio: Okay, first of all, I would agree very much with what Pedro was telling about the positive and negative aspects, I think he put very well, that we have a kind of trade-off between sustainability and the affordability of the internet infrastructure, especially when we are talking about developing countries, such as in the South Global, like Brazil and Africa and other countries like that, because in the beginning, implementing those new devices with sustainable or energy-efficient devices would raise the costs, because we have to change whatever we are using now for these new devices, and this will cost money, for sure, and this will come from, needs to come from somewhere, yeah, and this is something that is a kind of a difficult trade-off, because we know that in the long term, these costs will be paid by having less consumption of energy, and also to affect less the world in general, but in the beginning, it’s very difficult, because we have to change this, and you know that this is something that a lot of companies, and especially the smallest ones will be very much impacted by that, because they don’t have so much money in hand to invest on that, or such as also communities that they are doing like their own networks for communities, and they don’t have this money so easily to do this, and nowadays, yet, these kind of devices are kind of not so much available everywhere, so we can see, for example, in Europe, they have scales of energy efficiency for various devices, and people try or tend to buy the ones more efficient, but until you change the whole ecosystem, this will take a very long time for that, so, but to summarize the influence, I think it’s that in the long term, it’s good for the whole society, because we need a more sustainable world, it’s like about surviving in this world, but the negative aspect is that maybe would increase the barriers to these new users, especially in low-income regions, so that’s my view about that.

Lucas Jorge da Silva: Thank you, Natalia, and I’d like to move on Jeffrey, and considering the project that you work, and you mentioned earlier, I believe you have some very fascinating opinions about the access for new users, so Jeffrey, how do sustainability efforts affect the access for these new users?

Jeffrey Llanto: Yes, Lucas, this is very appropriate right now, one word that I can answer on policy question number two, it’s empowerment. Empowerment is very important to the communities, first and foremost, nothing bad against the ISPs, they will never set up an infrastructure where there’s no return of investment, right, so we have some eight pilot areas right now working with the Philippine government and a group called Unconnected.org that provides a satellite connection to underserved areas, as I mentioned that even though there’s a project that we call as ILET, just like a small island, it’s the Internet for Sustainable Livelihood Education and Tourism, so it’s an empowerment of the communities that they will be able first to operate, sustain, and generate funds from the communities, so technology is given that there’s technology, and again, how do you sustain the Internet connection on the community level, so we’re talking about residents who really doesn’t have the smartphones, they don’t have the laptop, let me give you an example on an island with a population of about 1,200 people, it’s a small island, they don’t have water, they don’t have electricity, when we provided them with the first Internet connection, a broadband connection on their island, we never expected that the people were so happy, when there was already Internet connection, it pampers different opportunities, a month later, we noticed that there was already a coin-operated internet in which the residents will put coins for how many minutes of internet connection. So innovation, again, it developed into some kind of fruit bearing. Then all of a sudden, one of the communities and the house, he bought a smartphone, I mean a smart TV, then he operated a community-based movie theater, then they subscribed to Netflix. So those are the things that really need to be focused, especially for countries like the Philippines. The Philippines, if you take note, it is one of the most expensive internet connections around the world and also the slowest. It’s because there’s only two big telecommunications can run internet. And again, it’s very timely, as I mentioned to you, Lucas, because tomorrow the CNET already have a new bill to cut off this kind of monopoly or duopoly that we call it. So tomorrow there’s a CNET and we are also one of the strong promoter that this bill will be passed so that other players, just like Cbusiness Foundation that I started way back in 1994, we used to be an internet service provider. But when business comes in, it becomes like a, they call it a duopoly, only the giant telecoms are running it. I’m not saying it’s bad, but it never really trickles down to the community. For the past 30 years, from the start of the internet in the Philippines, I was there, I was teaching people how to use the internet. Until now, 30 years later, I’m still doing that and working to unconnected islands. So it’s really my passion. So again, going back to policy question, it’s really empowerment. You need to put the people in charge so that they can operate it by themselves. So thank you.

Lucas Jorge da Silva: Thank you very much, Jeffrey. And the last one, Eunice, what is your perspective to how the sustainability efforts affect access for new users?

Eunice Pérez Coello: I want to start to say I agree with my colleagues. Moving towards a sustainability, sustainable internet infrastructure is not without challenge. First, there’s the issues of resource disparities, particularly in regions like Latin America, where the rural and underserved communities struggle to access basic connectivity, let alone eco-friendly solutions. I think another is the lack of the global consensus of green internet standards. It could create fragmentation and delay in progress and limit knowledge and expertise in implementing sustainable technologies for complicating this effort. This challenge directly influences access. Sustainable infrastructure can be costly, potentially delaying the connection of new users and widening the digital divide. I think in the academy, we must step up and not just with innovative research, but also through direct engagement with affected communities by maybe conducting regional studies and involving local voices. We can ensure solutions and could be inclusive and effective. Moving is complex, talking about moving towards sustainability in Latin America. For the diverse geographic make, it’s hard to ensure eco-friendly connectivity everywhere. Academia can help, I think, by designing a cost-effective solution and involve communities in decision-making. Maybe I could say an example. A partnership between universities, observation, and rural cooperatives in Mexico has shown how academia can bring practical insights to underserved areas. I think one of the challenges is the cost, as my colleagues say.

Lucas Jorge da Silva: All right. Thank you for the insights. We have little time, so we have to rush a little bit. Now, we will do the last question to the audience. Let me share the quiz. Now, the last question. What do you believe is essential for promoting green practices in the internet infrastructure? Training, education, multi-stakeholderism, knowledge, money. Money always shows in the answers. Responsibility, partnership, studies, collaboration. Time is up, so these are our words. Now, I will pass for the next policy question. Because we have a tight time now, I would like to ask for our speakers to be very briefly, like one minute, to answer this question, if possible. How can we spread our readiness adoption of green standards? And I will… In energy monetary projects, what is the best way to spread our readiness and adoption of green standards? No? Yes.

Eunice Pérez Coello: I don’t hear. Lucas, is it for me to start? Yes, yes. Sorry. Okay. In one minute. Strategies, I think, is education, principally. Maybe course and public campaigns and another in collaborations. Okay, I say three. Education, collaboration, and incentives. And these three strategies must create these adaptations. And I think the road to green requires all of this. Academia can lead awareness campaigns and collaboration among the stakeholders is the key to the overcoming regional challenge. And we must also provide practical incentives, such as seeds for adopting sustainable practice and certifications, maybe for meeting a green benchmark.

Lucas Jorge da Silva: Okay, thank you. Now, let’s hear about this with Natalia as a research. How do you think we can give more spread for our awareness and adoption of this standard? One minute.

Nathalia Sautchuk Patricio: Wow. One minute. I think, wow, this is difficult. I think education campaign is one thing that is important, but also we have to make some kind of incentives program, especially by the government side to promote this among companies and also the community. And also standard certifications, this kind of stuff all can help to go in this direction. This is my one minute. Wow.

Lucas Jorge da Silva: Thank you. Now, Jeffrey, one minute. How can we spread awareness and adoption of green standards?

Jeffrey Llanto: Yeah, I think we can spread awareness by involving the communities, giving them ownership of what the adoption of green standards is. So first, it’s very important that this will really trickle down to the people who are affected by the green standards. And again, going back, green standards really address the future of especially climate change. So again, awareness and especially for the ownership of the community that they need to have it run and be able to sustain those kind of not only the technology, but also the real system, the infrastructure and how it is being brought to the community level. That’s it.

Lucas Jorge da Silva: Thank you, Jeff. Thank you, Jeffrey. And finally, Pedro, how can we spread awareness and adoption of green standards very quickly? One minute. OK.

Pedro Camara: OK. Spreading awareness and adoption of green standards require collaboration, education and green initiatives. Stakeholders like ISPN, governments and organizations must lead by example, implementing green practices and sharing social stories to demonstrate their benefits. Public campaigns can highlight the importance of sustainable internet practices targeting both business and customers. Governments and industry bodies can establish certification and recognition programs for green compliance in current widespread adoption training programs for IT professionals can ensure that green standards are understood and applied in network design and operation. Lastly, fostering partnership between stakeholders such as ISPN and environment organizations can amplify outreach efforts and make green standards a shared priority across the industry.

Lucas Jorge da Silva: Thank you, Pedro. And I appreciate a lot of the speakers and the time passed fast, but it was a pleasure to be here, to share the table with all these professionals. I think that we can learn a lot about the infrastructure, the green standards. And I want to continue this conversation in an email or in another opportunity. And I’d like to invite Tiago to provide a very quick closing remarks. Thank you all. I apologize for the rush in the end and for any nervous. It was my first time and I’m very grateful to share the table with some special members of the community. Thank you.

Tiago Jun Nakamura: Thank you, Lucas. As we come to the close of this workshop, I want to take a moment to express my heartfelt thanks to all of you, to our incredible speakers. Thank you for sharing your expertise. Your insights and your time, your contributions has sparked meaningful discussions and left us with valuable knowledge to carry forward. And to our audience, thank you for your participation. Your presence here today reflects a commitment to growth and learning. It’s what makes the events like this truly impactful. This workshop is a testament to what we can achieve when knowledge is shared and connections are made. Let’s take the ideas we’ve explored today and continue to collaborate and innovate together. It is clear that further stakeholder discussions are needed towards green networks and we hope that everyone here participates on this journey during development over the next years. Thank you all and we look forward to seeing you at the future workshops here.

Lucas Jorge da Silva: Thank you, Tiago. We have three times, three minutes, if anyone here in the audience can make a statement or a comment. What is your name and where are you from? Thank you very much.

Audience: My name is Jimson Olufuye from Africa City Alliance, Abuja, Nigeria. I work as a consultant going to data center digitalization. It’s a great workshop, excellent, but I just have one comment to make that could fast-track the process of using green. If you look at the example of how IPv6 has been adopted, maybe in France, they brought about smart regulation. So they required ISPs, new ISPs, to deploy, to use IPv6 by social superior. So in the same way, our regulators can easily fast-track the process. If it’s smart regulation, you need to have this green equipment and so on, give them a timeline. That will help you a lot. Thank you.

Lucas Jorge da Silva: Thank you. I like when people talk about IPv6, it’s always a pleasure. Anyone from the audience want to talk about where you’re from, what’s your name?

Audience: Thank you. My name is Mariana. I’m from Brazil. Actually, I had a question, or maybe some suggestion. I guess that sustainability is a theme in the concern of the decades, and the influence of the digital era on energy consumption needs to be considered. And when we are talking about the infrastructure of the internet, it’s interesting to consider that in a broad way. I mean, we are talking about the infrastructure of the internet to have all the data transported to the country for the other country, but we need a challenge. Okay, I have a problem with the audio, but it’s interesting to address our consumption to maintain, to treat, and to share all the data that we are using now with the AI models, or we have the powerful computational power, too. And what I want to know is when we are using here the greener internet, we are just talking about the infrastructure to the providers, or we are addressing all these many full ecosystems?

Lucas Jorge da Silva: Thank you. And I’d like to thank you all, the speakers, once again, the members here in the table, and the audience. And I have to close, because we don’t have much time. Thank you all, and we’ll see you at another opportunity in the future. Bye.

N

Nathalia Sautchuk Patricio

Speech speed

106 words per minute

Speech length

917 words

Speech time

516 seconds

Collaboration across sectors is key

Explanation

Nathalia emphasizes the importance of collaboration between different stakeholders to achieve a greener Internet. She suggests that each sector has its own contributions to make, but working together is essential for progress.

Evidence

She mentions the need for governments to create policies and provide incentives, while network providers implement energy-efficient hardware and sustainable operations.

Major Discussion Point

Duties of stakeholders for a greener Internet

Agreed with

Pedro Camara

Eunice Perez Coello

Agreed on

Collaboration among stakeholders is crucial for a greener Internet

Differed with

Jeffrey Llanto

Differed on

Approach to implementing sustainable practices

E

Eunice Pérez Coello

Speech speed

88 words per minute

Speech length

496 words

Speech time

336 seconds

ISPs must adopt energy efficient practices

Explanation

Eunice highlights the role of Internet Service Providers in adopting energy-efficient practices. She emphasizes that ISPs need to prioritize sustainability in their operations to contribute to a greener Internet.

Major Discussion Point

Duties of stakeholders for a greener Internet

Agreed with

Nathalia Sautchuk Patricio

Pedro Camara

Agreed on

Collaboration among stakeholders is crucial for a greener Internet

Education and public campaigns are essential

Explanation

Eunice stresses the importance of education and public campaigns in spreading awareness about green standards. She believes that these strategies are crucial for promoting the adoption of sustainable practices in Internet infrastructure.

Major Discussion Point

Spreading awareness and adoption of green standards

Agreed with

Pedro Camara

Agreed on

Education and awareness are important for promoting green standards

Lack of global consensus on standards

Explanation

Eunice points out that the absence of a global consensus on green Internet standards is a significant challenge. This lack of agreement can lead to fragmentation and delays in progress towards sustainable Internet infrastructure.

Evidence

She mentions that this challenge could limit knowledge and expertise in implementing sustainable technologies.

Major Discussion Point

Challenges for sustainable Internet infrastructure

J

Jeffrey Llanto

Speech speed

127 words per minute

Speech length

1000 words

Speech time

471 seconds

Need to understand real-world impacts on vulnerable communities

Explanation

Jeffrey emphasizes the importance of understanding how climate change and Internet infrastructure affect vulnerable communities. He argues that stakeholders must consider the real-world scenarios and challenges faced by these communities.

Evidence

He provides examples of communities without electricity or water, and schools affected by rising sea levels.

Major Discussion Point

Duties of stakeholders for a greener Internet

Empowerment of local communities is crucial

Explanation

Jeffrey stresses the importance of empowering local communities in the adoption of sustainable Internet practices. He argues that giving communities ownership and control over their Internet infrastructure is essential for sustainability.

Evidence

He mentions pilot projects providing satellite connections to underserved areas and how communities have developed innovative uses for Internet access.

Major Discussion Point

Sustainability efforts and access for new users

Differed with

Nathalia Sautchuk Patricio

Differed on

Approach to implementing sustainable practices

Community involvement and ownership is important

Explanation

Jeffrey reiterates the importance of involving communities in the adoption of green standards. He believes that giving communities ownership of the process is crucial for successful implementation and sustainability.

Major Discussion Point

Spreading awareness and adoption of green standards

Disparities between regions and communities

Explanation

Jeffrey highlights the disparities between different regions and communities in terms of Internet access and infrastructure. He points out that these disparities pose significant challenges for implementing sustainable Internet practices uniformly.

Evidence

He mentions the existence of unconnected islands and communities without basic infrastructure like water and electricity.

Major Discussion Point

Challenges for sustainable Internet infrastructure

P

Pedro Camara

Speech speed

65 words per minute

Speech length

340 words

Speech time

313 seconds

Stakeholders must coordinate efforts to reduce environmental impact

Explanation

Pedro emphasizes the need for coordinated efforts among various stakeholders to reduce the environmental impact of the Internet. He argues that different groups must work together to implement sustainable practices across the industry.

Evidence

He mentions specific actions for different stakeholders, such as ISPs adopting energy-efficient operations, governments enacting sustainability policies, and equipment manufacturers designing eco-friendly hardware.

Major Discussion Point

Duties of stakeholders for a greener Internet

Agreed with

Nathalia Sautchuk Patricio

Eunice Perez Coello

Agreed on

Collaboration among stakeholders is crucial for a greener Internet

Initial costs may hinder access but long-term benefits exist

Explanation

Pedro acknowledges that the initial costs of implementing sustainable technologies may hinder access for new users. However, he also points out that there are long-term benefits to these efforts, such as potential cost reductions and improved affordability.

Evidence

He mentions that optimization of energy usage can lower operational costs for ISPs, potentially making services more affordable in the long run.

Major Discussion Point

Sustainability efforts and access for new users

Collaboration between stakeholders to amplify efforts

Explanation

Pedro emphasizes the importance of collaboration between different stakeholders to spread awareness and adoption of green standards. He suggests that partnerships between various groups can amplify outreach efforts and make green standards a shared priority.

Evidence

He mentions the potential for partnerships between ISPs and environmental organizations to enhance outreach efforts.

Major Discussion Point

Spreading awareness and adoption of green standards

Agreed with

Eunice Perez Coello

Agreed on

Education and awareness are important for promoting green standards

Initial costs and affordability

Explanation

Pedro identifies initial costs and affordability as significant challenges for implementing sustainable Internet infrastructure. He recognizes that the upfront expenses of transitioning to greener technologies can be a barrier, especially for smaller providers or underserved areas.

Major Discussion Point

Challenges for sustainable Internet infrastructure

L

Lucas Jorge da Silva

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Balancing sustainability with performance and expansion

Explanation

Lucas highlights the challenge of balancing sustainability efforts with the need for Internet performance and expansion. He suggests that the goal is to do more with less, using fewer resources without affecting the functionality of the Internet.

Evidence

He mentions the concept of green networks that aim to minimize environmental impact while optimizing resource use.

Major Discussion Point

Challenges for sustainable Internet infrastructure

Agreements

Agreement Points

Collaboration among stakeholders is crucial for a greener Internet

Nathalia Sautchuk Patricio

Pedro Camara

Eunice Perez Coello

Collaboration across sectors is key

Stakeholders must coordinate efforts to reduce environmental impact

ISPs must adopt energy efficient practices

The speakers agree that collaboration between different stakeholders, including ISPs, governments, and other sectors, is essential for achieving a greener Internet and implementing sustainable practices.

Education and awareness are important for promoting green standards

Eunice Perez Coello

Pedro Camara

Education and public campaigns are essential

Collaboration between stakeholders to amplify efforts

Both speakers emphasize the importance of education, public campaigns, and collaboration to spread awareness and promote the adoption of green standards in Internet infrastructure.

Similar Viewpoints

Both speakers recognize the challenges faced by communities in adopting sustainable Internet practices, particularly in terms of costs and access. They emphasize the importance of empowering local communities and considering long-term benefits despite initial challenges.

Jeffrey Llanto

Pedro Camara

Empowerment of local communities is crucial

Initial costs may hinder access but long-term benefits exist

Unexpected Consensus

Importance of considering real-world impacts on vulnerable communities

Jeffrey Llanto

Eunice Perez Coello

Need to understand real-world impacts on vulnerable communities

Lack of global consensus on standards

While coming from different perspectives, both speakers highlight the importance of considering real-world impacts and challenges faced by vulnerable communities in implementing green Internet standards. This unexpected consensus underscores the need for a more inclusive approach to sustainability efforts.

Overall Assessment

Summary

The main areas of agreement include the importance of collaboration among stakeholders, the need for education and awareness campaigns, and the recognition of challenges faced by vulnerable communities in adopting sustainable Internet practices.

Consensus level

There is a moderate level of consensus among the speakers on the key issues. This consensus suggests a shared understanding of the challenges and potential solutions for implementing greener Internet infrastructure. However, there are also nuanced differences in approaches and emphasis, particularly regarding the balance between sustainability efforts and ensuring access for new users. This level of consensus implies that there is a solid foundation for further discussions and collaborative efforts towards achieving a greener Internet, but also highlights the need for continued dialogue to address specific challenges and implementation strategies.

Differences

Different Viewpoints

Approach to implementing sustainable practices

Nathalia Sautchuk Patricio

Jeffrey Llanto

Collaboration across sectors is key

Empowerment of local communities is crucial

While Nathalia emphasizes cross-sector collaboration, Jeffrey focuses more on empowering local communities as the primary approach to implementing sustainable practices.

Unexpected Differences

Focus on vulnerable communities

Jeffrey Llanto

Other speakers

Need to understand real-world impacts on vulnerable communities

Disparities between regions and communities

Jeffrey’s strong emphasis on understanding and addressing the needs of vulnerable communities, particularly those without basic infrastructure, was not as prominently featured in other speakers’ arguments. This highlights an unexpected difference in prioritizing the most disadvantaged populations in the context of sustainable Internet practices.

Overall Assessment

Summary

The main areas of disagreement revolve around the primary approach to implementing sustainable practices (collaboration vs. community empowerment), the specific roles of different stakeholders, and the degree of focus on vulnerable communities.

Difference level

The level of disagreement among the speakers is moderate. While there are some differences in approach and emphasis, there is a general consensus on the importance of sustainable practices and the need for various stakeholders to be involved. These differences in perspective could actually be beneficial in developing a more comprehensive and nuanced approach to creating a greener Internet, as they highlight different aspects of the challenge that need to be addressed.

Partial Agreements

Partial Agreements

Both speakers agree on the need for ISPs to adopt sustainable practices, but Pedro emphasizes a broader coordination among all stakeholders, while Eunice focuses specifically on ISPs’ role.

Eunice Perez Coello

Pedro Camara

ISPs must adopt energy efficient practices

Stakeholders must coordinate efforts to reduce environmental impact

Both speakers acknowledge the challenge of initial costs in implementing sustainable practices, but Pedro more explicitly highlights the long-term benefits that could offset these costs.

Nathalia Sautchuk Patricio

Pedro Camara

Initial costs may hinder access but long-term benefits exist

Initial costs and affordability

Takeaways

Key Takeaways

Collaboration across different stakeholder groups is crucial for developing a greener Internet

There is a need to balance sustainability efforts with ensuring affordable access, especially in developing regions

Education, awareness campaigns, and incentives are important for promoting green practices

Community involvement and empowerment is key, particularly for underserved areas

Initial costs of transitioning to greener technologies pose challenges, but there are long-term benefits

Regulatory approaches and standards can help drive adoption of green practices

Resolutions and Action Items

Continue stakeholder discussions on green networks in future workshops and events

Explore partnerships between academia, industry, and communities to develop sustainable solutions

Consider regulatory approaches to incentivize adoption of green standards by ISPs

Unresolved Issues

How to overcome the initial high costs of transitioning to greener technologies, especially for smaller providers and underserved communities

Lack of global consensus on green Internet standards

How to effectively balance sustainability efforts with expanding Internet access in developing regions

Scope of ‘green Internet’ – whether it encompasses just infrastructure or the broader ecosystem including data centers and end-user practices

Suggested Compromises

Balancing green initiatives with affordability through government incentives and gradual implementation

Involving local communities in sustainable Internet projects to ensure solutions are practical and effective

Combining education efforts with practical incentives and certifications to promote adoption of green practices

Thought Provoking Comments

We need to go first, who are the people who are really affected by climate change. So, there is one area that we’re working with USAID that this school submerged. It’s part of another island in the Philippines. It submerged every high tide.

speaker

Jeffrey Llanto

reason

This comment brought a concrete, real-world example of climate change impacts to the discussion, highlighting the urgency of the issue.

impact

It shifted the conversation from abstract concepts to tangible effects, emphasizing the immediate need for action and the human cost of inaction.

The duty for example for governments is to help in some way like creating policies in this matter. Like to help or provide some kind of motivation or incentives to move in this direction.

speaker

Nathalia Sautchuk Patricio

reason

This insight highlighted the crucial role of government policy in driving sustainable practices.

impact

It broadened the discussion from technical solutions to include policy and economic incentives as key factors in promoting green internet practices.

Sustainability efforts can positively and negatively impact access for new users, depending on how they are implemented. On one hand, optimization, energy usage, and infrastructure can lower operation cost for ESPs, potentially reducing service costs and making internet access more affordable.

speaker

Pedro Camara

reason

This comment introduced nuance to the discussion by pointing out both positive and negative potential impacts of sustainability efforts.

impact

It deepened the analysis by encouraging consideration of the complex trade-offs involved in implementing green practices.

Empowerment is very important to the communities, first and foremost, nothing bad against the ISPs, they will never set up an infrastructure where there’s no return of investment, right, so we have some eight pilot areas right now working with the Philippine government and a group called Unconnected.org that provides a satellite connection to underserved areas

speaker

Jeffrey Llanto

reason

This insight brought attention to the importance of community empowerment and alternative models for providing internet access.

impact

It shifted the discussion towards considering innovative, community-centered approaches to sustainable internet infrastructure.

Academia can help, I think, by designing a cost-effective solution and involve communities in decision-making. Maybe I could say an example. A partnership between universities, observation, and rural cooperatives in Mexico has shown how academia can bring practical insights to underserved areas.

speaker

Eunice Perez Coello

reason

This comment highlighted the potential role of academia in bridging the gap between research and practical implementation.

impact

It expanded the conversation to include the importance of collaboration between different sectors (academia, communities, cooperatives) in developing sustainable solutions.

Overall Assessment

These key comments shaped the discussion by grounding it in real-world examples, highlighting the complexity of implementing green practices, emphasizing the importance of policy and economic incentives, and showcasing the potential for innovative, community-centered approaches. The conversation evolved from a general discussion about sustainability to a more nuanced exploration of the challenges and opportunities in different contexts, particularly in developing countries and underserved areas. The comments also broadened the scope of stakeholders considered, emphasizing the roles of governments, communities, academia, and industry in creating a more sustainable internet infrastructure.

Follow-up Questions

How can we balance sustainability efforts with affordability and access for underserved communities?

speaker

Nathalia Sautchuk Patricio

explanation

This is important to ensure sustainability doesn’t widen the digital divide

What incentives or policies can governments implement to help ISPs and communities transition to more energy-efficient technologies?

speaker

Nathalia Sautchuk Patricio

explanation

This addresses the challenge of high initial costs for implementing sustainable infrastructure

How can community empowerment models be scaled to improve sustainable internet access in underserved areas?

speaker

Jeffrey Llanto

explanation

This approach could help overcome barriers to connectivity in remote or low-income regions

What role can academia play in developing cost-effective, sustainable connectivity solutions for diverse geographic regions?

speaker

Eunice Perez Coello

explanation

This could help address the challenge of implementing eco-friendly connectivity in varied landscapes

How can we create and implement global consensus on green internet standards?

speaker

Eunice Perez Coello

explanation

This is crucial to avoid fragmentation and ensure consistent progress in sustainable internet infrastructure

What are effective strategies for education and public awareness campaigns about sustainable internet practices?

speaker

Multiple speakers

explanation

This was identified as a key factor in promoting adoption of green standards

How can we develop and implement certification programs for green compliance in the internet industry?

speaker

Pedro Camara

explanation

This could incentivize and recognize adoption of sustainable practices

What is the energy impact of emerging technologies like AI and powerful computational models on internet infrastructure?

speaker

Audience member (Mariana)

explanation

This expands the scope of considering sustainability in internet infrastructure beyond just connectivity

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #41 Big Techs and Journalism: Disputes and Regulatory Models

WS #41 Big Techs and Journalism: Disputes and Regulatory Models

Session at a Glance

Summary

This discussion focused on the complex relationship between big tech companies, digital platforms, and journalism in the modern media landscape. Participants explored various challenges and potential solutions regarding the sustainability of journalism in the digital era.

Key issues discussed included the shift of advertising revenue from traditional media to digital platforms, the impact of AI on content creation and distribution, and the need for fair compensation for journalistic content used by platforms. The speakers debated different regulatory approaches, such as Australia’s news bargaining code and proposals for public sector funds financed by digital platforms.

There was significant discussion about the difficulties in defining “media” and “journalism” in the current information ecosystem, which complicates regulatory efforts. The impact of AI on journalism was a major concern, with participants noting both threats to copyright and potential benefits for content creation.

The speakers disagreed on the role of government in regulating the relationship between tech companies and media. Some argued for stronger regulation to protect journalism, while others cautioned against government involvement, fearing potential threats to free speech.

The discussion also touched on the need for media companies to adapt their business models and build direct relationships with audiences. Participants emphasized the importance of maintaining journalistic ethics and quality in the face of technological disruption.

Overall, the conversation highlighted the complexity of balancing innovation, fair compensation, and the preservation of quality journalism in the digital age. The speakers agreed that ongoing global dialogue and collaboration are necessary to address these challenges effectively.

Keypoints

Major discussion points:

– The impact of digital platforms on journalism revenue and business models

– Regulatory approaches to compensate news organizations for content used by platforms

– Challenges of defining journalism and who should benefit from compensation schemes

– The emerging threat of AI systems using journalistic content without compensation

– The need for media organizations to adapt and innovate their own business models

The overall purpose of the discussion was to explore different perspectives on how to promote journalism sustainability in the digital era, examining regulatory models and alternatives for fair compensation from digital platforms.

The tone of the discussion was thoughtful and analytical, with participants offering nuanced views on complex issues. There was general agreement on the challenges facing journalism, but some disagreement on solutions, particularly around government involvement. The tone became more urgent when discussing AI, reflecting the rapidly evolving nature of that threat.

Speakers

– Bia Barbosa (Moderator): Journalist and member of the Brazilian Internet Steering Committee

– Iva Nenadic: Researcher at the European University Institute Center for Media Pluralism and Media Freedom; studies media pluralism in the context of content curation, ranking, and moderation policies of online platforms

– Juliana Harsianti: Journalist and researcher from Indonesia; works on the influence of digital technology in developing countries

– Nikhil Pahwa: Indian journalist, digital rights activist, and founder of Media Nama; key commentator on Indian digital media, censorship, and internet regulation


Full session report

The Digital Media Landscape: Challenges and Opportunities for Journalism

This discussion, moderated by Bia Barbosa, a journalist and member of the Brazilian Internet Steering Committee, explored the complex relationship between big tech companies, digital platforms, and journalism in the modern media landscape. Participants from diverse backgrounds, including Iva Nenadic, Juliana Harsianti, and Nikhil Pahwa, examined various challenges and potential solutions regarding the sustainability of journalism in the digital era.

Impact of Digital Platforms on Journalism

The speakers unanimously agreed that digital platforms have significantly disrupted traditional media business models. Nikhil Pahwa, providing perspective from India, noted that platforms both benefit media by driving traffic and compete for advertising revenue. Juliana Harsianti pointed out that small media outlets can use platforms to reach audiences but face sustainability challenges, particularly in the Global South. Iva Nenadic emphasised the tremendous power platforms wield in shaping information systems with little accountability.

Regulatory Approaches and Compensation Models

The discussion revealed divergent views on regulatory approaches to platform-media relationships. Nikhil Pahwa criticised Australia’s news bargaining code, arguing it set a problematic precedent of paying for links. He cautioned against government involvement in media-platform relationships, citing risks to media independence. Instead, Pahwa advocated for regulation focusing on algorithmic accountability and transparency rather than mandating payments.

Iva Nenadic highlighted the Danish model of collective negotiation as a potential alternative. This approach involves media organizations collectively bargaining with platforms, potentially addressing the power imbalance between large tech companies and media outlets, especially smaller ones. Nenadic suggested that this model could be more effective than individual deals or government-mandated payments.

The speakers acknowledged the difficulty in defining “media” and “journalism” in the current information ecosystem, which complicates regulatory efforts.

Emerging Threats and Opportunities from AI

The impact of AI on journalism emerged as a major concern. Bia Barbosa raised copyright issues regarding AI systems using journalistic content to train models without compensation. Nikhil Pahwa warned about AI summaries potentially cannibalising traffic from news sites, disrupting traditional web traffic dynamics. However, he also noted the potential future use of synthetic data by AI models, which could reduce the need for journalistic content in training.

Juliana Harsianti highlighted the use of AI in content creation by journalists in Indonesia, raising ethical concerns about journalistic integrity and the future of the profession. The speakers agreed that AI’s rapid evolution presents both opportunities and threats to journalism, necessitating new regulatory frameworks and ethical guidelines.

Future of Journalism and Media Sustainability

Nikhil Pahwa argued that media organisations need to innovate and develop new business models rather than relying on subsidies or government intervention. He suggested that media companies should protect their rights through legal means when necessary.

Iva Nenadic stressed the importance of journalism demonstrating its value proposition to audiences, particularly in light of declining trust, especially among younger demographics. She emphasized the need for self-reflection within the journalism profession to address these issues and reconnect with younger audiences.

Juliana Harsianti highlighted the unique sustainability challenges faced by small and alternative media outlets in developing countries, where they often rely on donor funding. This underscored the need for diverse solutions that consider regional contexts and the specific needs of smaller media initiatives.

Unresolved Issues and Future Considerations

The discussion left several crucial issues unresolved, including:

1. Effectively regulating AI’s use of journalistic content without stifling innovation

2. Determining fair compensation models for platforms’ use of media content

3. Balancing the need for regulation with concerns about government involvement in media

4. Addressing declining trust in traditional journalism, especially among younger audiences

The speakers suggested potential areas for further exploration, such as:

1. Developing collective bargaining strategies for media coalitions

2. Creating public sector funds financed by digital platforms to support journalism

3. Establishing self-regulatory frameworks within the journalism industry to address ethical concerns around AI use

Conclusion

The discussion highlighted the complexity of balancing innovation, fair compensation, and the preservation of quality journalism in the digital age. While the speakers agreed on the challenges facing journalism, they offered diverse perspectives on potential solutions, particularly regarding government involvement and regulatory approaches.

Bia Barbosa’s closing remarks emphasized the need for balance between big companies, national media companies, and the public interest, suggesting a potential role for the state to play. The conversation underscored the need for ongoing global dialogue and collaboration to address these challenges effectively, considering regional differences and the diverse needs of media outlets of all sizes. As the digital landscape continues to evolve rapidly, ensuring a sustainable future for quality journalism remains a critical global challenge requiring innovative and flexible approaches.

Session Transcript

Bia Barbosa: Okay, thank you. Is that okay? Perfect. Yeah. So, good afternoon, everyone who is here in Saudi Arabia. My name is Bia Barbosa. I'm a journalist and a member of the Brazilian Internet Steering Committee, and I'm going to moderate this workshop in the place of Rafael Evangelista, who was supposed to be here but had problems getting into the country because of a visa issue. Thank you, everybody, for being here, and thank you to the people who are with us here in the room as well. So welcome to the Big Techs and Journalism: Disputes and Regulatory Models workshop. The idea today is to have an open debate on what the alternatives are to promote journalism sustainability in the digital era, and what we can learn from regulatory endeavors on the remuneration of journalism by digital platforms across different countries. As a brief introduction, I would like to share with you that the demand for fair remuneration from digital platforms in favor of journalists or news companies is not new. It is a tension that has deepened since the rise to prominence of large information platforms and of communication mediated by social media. The exponential growth of digital platforms transformed the digital advertising ecosystem. Their business models, based on data collection and analysis for the purpose of targeted advertising, have profoundly impacted contemporary journalism, and the systematic shift of revenue from journalism to digital platforms has reshaped the landscape of media consumption, production, and distribution. These transformations not only alter the circulation of journalistic content, but also exacerbate power imbalances, potentially widening the gap between those with access to quality, reliable, and diverse information and those without. This is particularly evident in crises such as those surrounding public health and political and electoral communications. At the core of these concerns lies the question of how journalism is compensated by digital platforms, igniting a wave of regulatory proposals across many nations and mobilizing multiple stakeholders. Australia, notably, passed pioneering legislation addressing this issue. In Canada, the approval of the Online News Act prompted Meta to remove news from their platforms. A decree on the matter has been issued in Indonesia, while South Africa is currently conducting an inquiry into digital platform markets. In Brazil, where I come from, two proposals have been at the forefront of the debate since 2021: establishing in law an obligation for digital platforms to negotiate with journalism companies, and the approval of a public sector fund financed by digital platforms. Although these proposals do not necessarily contradict each other, the idea of a fund is defended by many actors as an alternative to the direct bargaining model and not as its complement. At the international level, regulatory initiatives have been the subject of years of negotiations involving not only the executive and legislative branches, but also the judiciary. In addition to the state actors, a myriad of other actors are taking part in the debates: digital platforms, media companies, researchers, journalists, civil society organizations, and international bodies. Last year, the Content and Cultural Goods Chamber of the Brazilian Internet Steering Committee published a study entitled Remuneration of Journalism by Digital Platforms, in which we mapped out five controversies on the subject.
The first one is who should benefit? In other words, what should be the scope of any legislation regarding remuneration of journalism by platforms? The trend in legislative proposals has been to create minimal criteria for designating potential beneficiaries, such as the number of employees or media turnover. However, these criteria have been criticized because they potentially exclude individuals or small businesses. For some, journalists themselves should be paid directly; for others, this is unfeasible. The second controversy is who should pay? The proposals we have mapped use different terminology to define the actor responsible for this remuneration: digital platforms in Australia, online content sharing service providers in the European Union, and digital news intermediary companies in Canada. In Brazil, the bill on platform regulation uses the terminology of social media providers, search engines, and instant messaging services. A third issue: pay for what? The understanding of what journalistic content is varies greatly. For example, a report published by the Organization for Economic Cooperation and Development in 2021 defines news as information or commentary on contemporary issues, explicitly excluding entertainment news. However, this is a narrow view, and it can be read into some of the regulatory initiatives analyzed in our report. In addition, an important part of the content made available by media, which generates high levels of engagement on social media platforms, refers to sports and entertainment. This controversy is also related to the content of voluntary agreements between platforms and journalism companies negotiated without the intermediation of a public authority. The guaranteed confidentiality of these commercial agreements prevents the evaluation of the criteria used to remunerate journalism and of their impact. There is therefore concern that the use of quantitative criteria, such as the number of publications, will serve as an incentive to reduce the quality of the content produced. The fourth controversy highlighted is related to the demand for more transparency in the work of the platforms, whether in relation to digital advertising revenue or to the algorithms used in the content recommendation systems for users. So, remuneration based on what data? And finally, what should the role of the state be? To what extent should the state interfere in relations between journalistic content producers and digital platforms? The Australian code left a wide margin for these actors to negotiate on their own. However, there is no consensus on whether this is the best model, especially considering specific countries like Brazil, where free negotiation between the parties can result in an even greater concentration of resources and power in a small number of players. The idea of a public sector fund, financed by digital platforms and managed in a participatory way, is based on a more proactive and broader vision of the role of the state, and in this case decisions about the beneficiaries of the initiative would be part of the construction of public policies to support journalists. So, there is much to discuss. Our workshop session will be divided into three parts. The first will consist of the speakers presenting their views and policy experience; in the second, the idea is to have a short debate among the different perspectives raised by the speakers; and the last one will be devoted to Q&A.
I would very much like to talk to our colleagues here in the room and in the online room as well. I'm going to introduce you not all at once, but one at a time, as you are about to speak. I think that we could start with Iva Nenadic. Iva studies media pluralism in the context of the content curation, ranking, and moderation policies of online platforms, the democratic implications such policies may have, and related regulatory interventions, at the European University Institute Center for Media Pluralism and Media Freedom. She has been involved in the design and implementation of the Media Pluralism Monitor. So, Iva, thank you very much for being with us. It would be great if you could present your thoughts on information pluralism online. Thank you very much. You have eight minutes.

Iva Nenadic: Thank you very much for having me. I will try to stick to eight minutes, and I will also try to be maybe a bit briefer so that we have more time for exchange. And I apologize in advance because my view may be a bit more Eurocentric, because this is the main area we focus on, being a research center on media pluralism at the European University Institute and running the Media Pluralism Monitor in all EU member states and in candidate countries, so candidates for EU membership. But of course we do regularly exchange with our colleagues and partners in South America, in Australia, in the US, and all over the globe. Basically, the focus of our work is on the health of the information system. And the way we understand media pluralism as a concept is perhaps a little bit different from the way this concept is understood in the US or in Australia or in other parts of the world, because when we speak about media pluralism, we don't speak just about competition in the market, the market dimension of this, but about a wider enabling environment for journalism and for media, which are enablers of freedom of expression. So we are looking at fundamental rights protection, such as access to information and freedom of expression, both in the regulatory framework and in practice; the role of relevant authorities; the status and safety of journalists, including digital safety, which is an important aspect of media pluralism; as well as social inclusiveness, or the representativeness of different social groups, not only in media content but also in management structures, and not only of media but increasingly also of digital companies or big tech, whichever terminology we want to use. And then there is this element of political independence or political power. So our work very much revolves around the concept of power. The way we approach and understand media pluralism, and the way we regulate in Europe to protect media pluralism, is to somehow curb or limit the centralization or concentration of opinion-forming power. And this is how we've been doing this for the media world that we had in the past. Of course, we are still not there when it comes to platforms, but I think it's quite obvious, and probably not just from this conversation, that opinion-forming power has increasingly shifted from the media, if it is even still with the media, to online platforms or digital platforms or digital intermediaries. So we live in an information environment in which digital platforms, so big technology companies, largely excluded from liability and accountability, actually do have power over shaping our information systems and over the distribution of media and journalistic content. The media, unlike digital platforms, do have liability over the content they produce and publish. So what we are seeing is a profound paradigm shift where, as I said, technology companies are becoming, or have become in many instances, especially for certain demographics, the key infrastructures where people actually engage with news and with the information that can affect and shape their political opinion. And so they have tremendous power, but very little responsibility in respect to that.
But because the focus of our conversation today is on the economic side, the economic implications, I will focus a little bit more on that, on this relationship between big tech and journalism in economic terms. I think it's really important to emphasize that, also in economic terms, this rise in the centrality of platforms has led to the disintegration of news production, which is very costly, especially if you think of investigative journalism and quality reporting in general, from distribution, which is kind of cheap. It's easy and cheap nowadays to distribute content and then benefit from or monetize it. And production has also been disintegrated from advertising, because the platforms have positioned themselves as intermediaries between the media or journalism and their audiences, and also between the media and advertisers. We know that traditionally the business model of media was developed as a two-sided market: providing news to audiences, or even charging them through forms of subscription or payment for newspapers and similar, and then selling the attention of audiences to advertisers. Now both sides of this market have been disrupted and are somehow controlled by the digital platforms or big tech. And in the multi-sided market of big tech companies, the media are just one component of the value chain. So I think this is also something important to keep in mind. You opened with a relatively strong focus on online platforms, digital platforms, but I think what's also important to introduce into this conversation is the increasingly relevant role of generative AI companies, who are extensively using media content to train their models and to generate outputs, very often separating the content from the source, so diluting the visibility of media brands, which again has implications for the economic sustainability of the media. And in that environment as well we have negotiations, or at least attempts at negotiations or at establishing a sort of level playing field, which is very difficult to establish, right? Because of tremendous imbalances in power between the tech side and the media side. But I think this dimension is also very, very relevant, very important to look at. And two last points I want to make. One is about thinking about the power of big tech in relation to media. They decide whether they want to carry media content or journalism at all. And we've seen this especially with these attempts at regulation: for example, you mentioned the news bargaining code in Australia, and the initiatives in Brazil, in India, in South Africa, in Canada, and in the U.S., especially in California, which is a very interesting case, trying to establish frameworks for negotiation or fair remuneration that should go from big tech to the media. And this is not easy because, again, there is a tremendous imbalance in power. What we've seen with the Australian example, which is the most advanced one, is that now there is some backsliding, because Australia recently published a review of the effectiveness of this framework for negotiations which suggests that it is not strong enough to ensure the sustainability of this approach, because we've seen the major platforms withdrawing. They don't want to renegotiate new deals. They don't want to expand these deals to include more local media, for example.
So it again suggests that the power is still with platforms; the power is still with big tech. Very often, as a response to regulatory intervention, what they do is either threaten to ban news or just ban it. What we've been seeing from them throughout the years is that they are segregating the news into specific tabs, for example, into specific areas of the services they provide, so that eventually they can just switch it off or shut it off. As for the kind of conversation we have in Europe, one important point to make is that, unlike Australia with its competition law approach, in Europe we focused a little bit more on copyright as a basis, as a ground for negotiations between the platforms and the media for fair remuneration. And I think this is also interesting for the conversation around generative AI and how to deal with this problem in that area. And we've seen a lot of issues with this, right? Because, first of all, these negotiations, as you already emphasised, are somehow opaque, so we don't really know what has been negotiated. Who negotiates? In some cases, in some countries, we have individual big publishers negotiating first or negotiating separately, which has implications for media pluralism, because what the big ones negotiate somehow sets the benchmark for the other ones, and if the big ones are negotiating and excluding the smaller ones, this really can have tremendous consequences for media pluralism, or information pluralism more broadly. The big markets, of course, are much better positioned, or big languages are much better positioned, to negotiate with big tech than the smaller ones. And the same applies to the tension between the publishers or media companies and journalists, because, as we've seen from many examples, they're not always aligned and they're not always on the same side, so who should benefit is indeed a big question. We define media and journalism in a very broad sense, trying to take into account that there is a plurality of relevant voices, voices of reference, in the contemporary information sphere that should somehow be considered at an equal level with journalists, but of course this complicates the situation even further. And I don't know if I have any minutes left or should I? Yeah, I have. Okay, good. So, basically, the main point I was trying to make is about what we are seeing, what we've learned from these initiatives, mostly focusing on Australia and the copyright directive in Europe, because these two, I think, have the longest experience; even though they've been around for only a couple of years, we can reflect a little bit and look at the effectiveness of these initiatives. I think there are a lot of shortcomings surfacing now that do show that we do not have sufficient instruments to deal with this enormous and even growing power of big tech, and that the negotiation power is still on the side of platforms. We haven't really managed to put the media at the same level to be able to negotiate equally. The problems are also on the media side: as I said, this fragmentation between the media companies, between media and journalists, between the big and small ones, between big markets and smaller markets, big languages and smaller languages. We do have good examples here. For example, in Denmark, they decided to form a coalition and to negotiate collectively with big tech.
And they are really persistent on this, and they're very clear about their conditions, setting their benchmarks high. Another problem that we should consider in this conversation is the lack of a clear methodology for what the value is, who should be calculating that value, and what fair remuneration means in this context. We have several examples or several cases where this value is calculated in a different way. So it's not clear, and of course it's not clear from these deals, because these deals, as we said already, are not transparent. And so what we are seeing increasingly in the policy framework is a shift from these bargaining or negotiation frameworks to something that is a bit more direct regulatory or policy intervention in this area. So there is increasing talk about the need, for example, to tackle the fact that platforms have this power to decide whether they even want to carry media content or not. In Europe, for example, we have the European Media Freedom Act, which introduces a precedent by putting forward the principle that media content is not just any other content to be found on an online platform, so platforms have to pay due regard to this content, and in case they want to moderate it, they need to follow a special procedure. And I think this speaks a lot to this direction of policy conversations suggesting that, if these platforms have indeed become key infrastructure for our relationship with news, with media, and with informative content more broadly, then maybe we should consider them as a public utility, and maybe there should be some must-carry rules in order to make sure that media and journalism content remains there, so that they don't have the power just to remove it. Or we should think of complete alternatives, so as to break down these dependencies. In terms of bargaining or negotiating frameworks for fair remuneration, there has been a shift, or an intention to shift this conversation, looking at the failure of these negotiation frameworks, or at least their shortcomings, towards something that is more direct intervention in terms of a digital tax or digital levy. But then this opens a new area of questions about how you allocate and distribute this money, especially taking into account that not all states have all the necessary checks and balances to make sure that these kinds of processes are not abused. So I think I said a lot, so I'll just stop here and look forward to the exchange.

Bia Barbosa: Thank you. Thank you so much, Iva. We will for sure have time for this exchange. You mentioned the impact on small journalistic initiatives, and I think that is a good way to link to Juliana Harsianti. I don't know if I pronounced your surname correctly. I would very much like to ask you to present your views on the impact of digital platforms on community development and the importance of journalism for these communities. To introduce you: Juliana is a journalist and researcher from Indonesia. She has worked mostly on the influence of digital technology in developing countries, contributing, for example, to Global Voices and international online media. I'm sure that her perspective has much in common with our perspective in Brazil as well. So I give the floor to Juliana. I don't know what time it is in Indonesia, but thank you for being with us.

Juliana Harsianti: Yeah, thank you. Can you hear me? I'm sorry I cannot turn on the video, because this is better for the sound connection. Thank you to the Brazilian Internet Steering Committee, CGI.br, for inviting me as a speaker on this important issue. Good afternoon to everyone who is attending in Riyadh. It is almost 9 PM here in Jakarta, but it is good to have this discussion with colleagues about the impact of big tech on journalism. As mentioned in the opening remarks, earlier this year Indonesia issued a presidential decree regulating big tech and digital platforms to share revenue with publishers, because the government thinks that the presence of digital platforms in Indonesia has disrupted the business model of mainstream media. It is still under discussion between the tech companies, the journalism associations, and the government in Indonesia whether this decree can be implemented shortly or whether there will be some modification or adjustment in the future. But this evening I will talk about how small media outlets take advantage of digital platforms and social media to promote freedom of the press and to spread more varied information in Indonesia. I can give two examples from Indonesia: Magdalene and Project Multatuli, both online media platforms based in Indonesia. Magdalene focuses on gender issues, while Project Multatuli focuses on in-depth journalism and highlights issues that have been avoided by the mainstream media in Indonesia. Why did they choose to take advantage of online platforms and digital social media? Because they can reach a larger audience and get more engagement from readers. Not from the business side, though, because they try to avoid having Google Ads on their platforms; they work organically to establish their websites in the Google search engine and keep their sites at the top of the search results. But, as Iva said, small and community media do not have the advantage of the big revenue that large media companies have, so they can act more freely to promote freedom of expression, run multilingual websites, and discuss more freely the issues that have been avoided by the mainstream media. And how do they manage to run the business? Yes, they have business models. Most of them get money from donors and from readers, not through subscriptions but mostly through donations from individuals who support their platforms and want to keep getting quality journalism and alternative media in Indonesia. I think this is enough from my side, and back to you.

Bia Barbosa: Thank you very much, Juliana. For sure there are other challenges regarding the sustainability of small media initiatives that we will be able to discuss. I think that from the Global South perspective we still have other challenges beyond those the Global North has, because at least from the South American and Latin American perspective we face the problem of media concentration. In very few countries do we have public media that can more or less guarantee some pluralism in the media landscape in general. So I think that, besides the challenges the developed countries already have regarding the sustainability of journalism, we are still facing last century's challenges regarding media pluralism, and now we face the new ones regarding all the stress that the new forms of production and distribution of content bring to us. So thank you very much for sharing the experience in Indonesia, and I think that we can move forward with Nikhil Pahwa. I would love to hear Nikhil on his studies of the demands for revenue from big tech companies, linking them to the legal cases against AI. I think that is a good connection to what Iva brought us at the beginning about how AI systems, and specifically generative AI systems, but not only those, are using journalistic content to train their models. So thank you very much for being here with us. I'm going to introduce you, and please feel free to complete any information. Nikhil is an Indian journalist, digital rights activist, and founder of Media Nama, a mobile and digital news portal. He has been a key commentator on stories and debates around Indian digital media companies, censorship and intimidation, and internet and mobile regulation in India, and of course he has been studying these demands for journalistic revenue from big tech companies. So thank you very much for being with us. You have 10 minutes.

Nikhil Pahwa: Thank you, and thank you for inviting me to this very important discussion. I'm a journalist and I've been a media entrepreneur from India for about 16 years now, and I've been a journalist for 18. I've also been blogging for about 21 years. I'm a part of a few key media-related committees in India that look at the impact of regulations on media, including the Media Regulation Committee of the Editors Guild of India, and I come at this from an internet perspective, having built my entire career on an online platform. We are a small media company; we have about 15 people working at our media organisation. But I also still do believe that journalism is not the exclusive privilege of traditional media or formal journalists. Even today news breaks on social media, and frankly, I see journalism as an act, and therefore people who publish verified content, even on social media, are also doing journalism. So we can't really look at things purely from a mainstream media lens, and, you know, even today there are online news channels and online podcasts that run as online media businesses, and they're just an alternative to traditional media. The primary challenge that media companies, and especially traditional media companies, face is the shift of advertising revenue from traditional media organisations, which had restricted distribution, to digital platforms, where they now face infinite competition because everyone can report, everyone can create content. And, you know, big tech companies like Google and Facebook have built business models that rely heavily on data collection and targeted advertising, which has meant that they are competing as aggregators with the media companies on their platforms. But also let us not forget that media companies compete with all users on the same platforms. So the real challenge for media is one of discovery. But we also have to realise that for media businesses, and I run one, the benefit that these platforms create is that they send us traffic as well. For most media publications, a majority of their traffic comes from search and social media, and they're the primary source of traffic for many news companies today, including us. What's also happening, you know, just to cover the complete situation, is that we are facing a new threat with AI summaries, like what Google does on its search, because unlike traditional search, which used to direct traffic to us, AI summaries potentially cannibalise traffic; they don't send us traffic anymore. And so Google isn't just an aggregator of links now, but is also turning into an answers engine. That is a term which is also used by Perplexity, which performs the same function; these and similar RAG models for AI basically take facts from news companies and compile them into fresh articles that serve a user's need. So in fact, a future threat for us, one that we will see play out in the next 2 to 3 years, is that apps like Perplexity, which use our content and the facts that we report, will start cannibalising our traffic. All media monetises the traffic that it has and relies on building a relationship with users so that they read them on a regular basis. But really, it is important to remember that if we do not get traffic, we will not be sustainable.
And so, while most of this conversation has been focused on getting paid for linking out, I think that is a battle that should not be fought, because we actually benefit from search engines and from social media platforms linking out to us. And if you start forcing them to pay and they choose not to link out, which is what Facebook did in Canada, it will actually cost news companies significant revenue, because audiences will not discover them. Australia's news bargaining code as well, I feel, has set the wrong precedent, because we benefit from traffic from social media. Linking out should not be mandatorily paid; it breaks a fundamental, foundational principle of the internet, where the internet is an interconnection of links, and people go from link to link to link and discover new content, new innovation, new things to read. And so I think we should be very careful about forcing platforms to pay to link out, because that is a mutually beneficial relationship. The advertising issue is frankly a function of the media not building a direct relationship with its audience, like we have built direct relationships with our audience, and therefore losing out on monetisation to big tech platforms. Let us not forget that publications like the Guardian chose to sign up with Facebook for its Instant Articles; effectively, while they thought they were benefiting from the traffic on Facebook, they were also giving up their audience to Facebook. So I think we need to be careful and we need to build our own direct relationships. But I want to talk a lot more about AI, because I think that is where it becomes problematic. The tricky thing with AI is that facts are not under copyright, and media companies and news reporters like us essentially report facts. There is copyright in how we write things but not in what we write about, because facts cannot be exclusive to one news company; the public good is effectively in the distribution and easy availability of facts. So platforms like Perplexity actually take facts from us, piece them together into a news article, taking from multiple news organisations, and they rewrite our content in a manner which, to be honest, can be much easier to read; and users can also query the same news article on sites like Perplexity, which means that a user gets all of their answers, based on our reporting, on other platforms. Now this is not copyright violation, but it is plagiarism, and unfortunately plagiarism is not illegal even though copyright violation is. Now most of the cases that are being run, some in the US, some in India, in the US brought by the New York Times and in India by a news agency called ANI, focus on the fact that our content is being taken by AI and ingested to train their models, and therefore the likelihood of them replicating our work is very high, and that they have taken this content without a license. And I think this is an important one, because there is no licensing, there is no compensation for using our work to train them, and I am aware that many news organizations across the world have actually signed up with AI companies for revenue sharing arrangements. Now this is a very short-term perspective, and usually AI companies will do exactly what, for example, Google has done with its Google News Initiative and its News Showcase, where they tie up with big media companies, and this will end up actually ensuring that smaller companies do not get any money.
In the case of AI, that is also what is going to happen. I will give you a small example: when we moved our website to a new server, our website crashed because of the number of AI bots that were hitting our servers and taking our content, because, since we had moved to a new server, they thought this was a new website. So this stealing of our work is, I think, something that we need legislation and codes to address, and there needs to be regulation around copyright and AI. The outcome of the legal battles happening in the US with the New York Times, as well as in India with ANI, is going to set very important legal frameworks for regulators as well. And no one wants to touch the copyright issue, because there is an uneasy tension amongst countries: there is a geopolitical battle going on right now about who comes out on top in the AI race, and they realise that large language models need more and more written content and written facts, and a large repository of that lies with news organisations. So while today we are trying to fight battles related to linking out, which I think is a battle that should not be fought because linking out, like I said, is a fundamental, foundational principle of the internet, the battle that we need to fight, and we need to fight it early, is the battle to ensure that we get compensated for content being used by AI companies, or that they essentially remove our content from their databases. That is a battle that I see happening only in courts, but not with legislators. And these legal frameworks are going to be very, very important to develop, because we need to create incentives for reporters to report and for news organisations to publish. Because let us face the facts: the content that AI companies generate is based on our work, and so if we do not do more original work, if we do not get incentivised to create original work, and media companies start dying, effectively they will have nothing to build on top of. So I think this is the revenue relationship that regulation needs to address, and, like I said multiple times, I strongly feel that the idea of paying for links is flawed, and what has happened in Canada and Australia is the wrong approach. Media companies are companies as well; they need to figure out mechanisms for monetization, and they have moved from an environment of limited competition in traditional media to infinite competition in digital media. They need to adapt to that change, not try and get a pittance from big tech firms. They should be competing with big tech firms. Thank you.

Bia Barbosa: Thank you very much, Nikhil. I think that you brought us a very challenging perspective, because so far we didn't manage to solve the challenge related to journalistic content used by platforms, by the news aggregators, and now you're already facing AI systems being trained on journalistic content. I would like to take the opportunity to ask you something. Here in Brazil, there is a bill on artificial intelligence regulation that has just passed in the Senate. We still need the Chamber of Deputies to move forward and approve the bill, but it provides for copyright payment for journalistic content used in training and in the responses of AI systems as well. Do you think that, even considering it is a copyright approach, this could be interesting for solving at least the kind of problem that you mentioned? I would like to hear from you a little bit, since we are looking at all the perspectives that are on the table in different parts of the world to tackle this issue.

Nikhil Pahwa: No, I think that if it's legislated that there needs to be compensation for usage of copyrighted content, that is the correct approach. It's just that once you agree that there should be compensation, the question becomes who gets compensated and how much do they get compensated? And, you know, what is the frequency at which they get compensated? Do you, for example, get paid for an entire data dump being given, or do you get compensated on the basis of how it is used? In this case, how do you validate that your content is actually being used by AI? You know, because even Europe is struggling with algorithmic accountability. And by the way, on the linking out part, I have said that there shouldn't be a revenue share mechanism there, but I do believe that we need algorithmic accountability for both social media as well as search to ensure that, you know, there is no discrimination happening in terms of surfacing our content. And as a small media owner, I don't want someone else to benefit big media or traditional media at my expense. So the fairness principles also need to be taken into consideration, in the same way that fairness needs to be taken into consideration in the case of the law in Brazil. But the question you have to ask is, who is media today? How do you identify that, with this organisation, you are actually supporting journalism? Because, like I said at the beginning, journalism is not the exclusive privilege of just journalists today, right? I am a blogger who started a media company. So I understand that bloggers also make money from advertising, and to that extent they don't get compensated. So why should I, as a blogger, be different from a media company? I am also running my own venture, right? So we are seeing an infinite ability for reporting today, because anyone can report. And in that scenario, who gets compensated and who does not becomes even trickier. If you are scraping a media publication, shouldn't a blogger also get compensated if their blog is being scraped for AI? That is a question. Why or why not? So these are not easy answers. I do not even know if there are answers to some of these questions. But when you are looking at defining laws, you have to create that differentiation. You have to break it up into who benefits and who does not benefit from that regulation. If you look at most podcasters, they are doing opinion journalism in a sense. They are carrying opinions, they are conducting interviews. Would you treat them as journalists under this law as well? So if their transcripts are being aggregated by AI, should they be compensated for that as well? Where do you draw the line? And that is the problem with laws: it is very tricky to draw the lines in these cases.

Bia Barbosa: Yeah, and besides the law, in countries where you do not have a democratic regulator to analyze how these kinds of laws are being implemented, it gives us an even more challenging situation to deal with. I do not know if Iva or Juliana want to comment on that or any other aspect. Iva, I would like to ask you, besides anything else that you wanted to bring us, to tell us a little bit about the coalition that you mentioned in Denmark, which the media established to collectively negotiate with the digital platforms. One of the issues that we had here in Brazil as well, in the platform regulation bill that is now in the Chamber of Deputies, was compensation based not on copyright but on the use of journalistic content, and how to negotiate for this, how it would be possible for the small initiatives to do that. There are already some digital journalism associations in Brazil that try to represent most of these small initiatives, but they don't manage to represent all of them. So how is this coalition that you mentioned working in Denmark? I felt it would be interesting to go a little bit deeper, but if you want to dive into the AI topic as well, please feel free.

Iva Nenadic: Thank you. Yeah, I’ll start with the last point. I think Nihil said many super interesting and relevant things. I want to stay for a second with this last point of the complexity we have to define media and journalism today, and this is indeed one of the key obstacles of all the, not only regulatory attempts, but also soft policy measures that we want to implement in this area, because this is the, I mean, it’s the first step, it’s the foundation. Who do we consider as a journalist? Who should benefit from these frameworks and who shouldn’t? How far can we stretch this? We’ve been doing a lot of work within the EU, but also Council of Europe that covers much more countries in Europe, and the Council of Europe has put forward some recommendations on how to define media and journalism in this new information world or information sphere we live in. And it takes a very broad approach, right, because it’s the freedom of expression that is at stake, so it’s one of the key principles. somehow that we nurture in Europe, the fact that the profession should be open and inclusive. And so if this is the principle, how do we solve these practical obstacles? Because we do see a lot of paradoxes of the information systems nowadays, right? The more open the debate somehow is, the more demagoguery, the more misinformation we have. So we have, in a way, we have plurality of voices in the news and information ecosystem, but not all of these voices are actually benefiting our democratic needs, right? Because many of these voices are actually misleading or extremely biased or not professional, not respecting ethical and professional principles. And so also creating a lot of disorder in the information system that confuses people, distorts trust, and has a lot of negative implications for our democratic systems. I can give one example that I’m not saying is a good solution, but maybe is a good starting point to look at on how to solve this problem. And this is something that has been heavily discussed within the negotiations around European Media Freedom Act that does provide this special treatment to media service providers, including journalists, in content moderation by major online platforms. So very large online platforms. We define them as those that have more than 10% of EU population as regular users. So around 45 million of people are using them on a regular basis, monthly users. And so in listing the criteria on, first of all, the law provides a definition which is very broad about media service providers, but listing the criteria on who are the media or journalists that can or should benefit from this special treatment, there is, for the first time in EU law, we have a mention of self-regulation. And we have an explicit reference to the respect of professional standards. So the law, and now I don’t recall exactly the text, but it says that those media who comply to national laws and regulation, but also comply to widely recognized self-regulatory or core regulatory frameworks are entitled to benefit from this. And of course, this can also be misused abuse. You can form an association of journalists that promotes wrong standards and claim that this is widely acknowledged framework if, I don’t know, they have a certain number of media within their umbrella. But I think there is something in that. I think we need to find a way to revive somehow self-regulation, respect of professional standards and ethical principles for different voices in the information sphere. 
And we can start from traditional journalistic principles, though these, of course, can also evolve for new needs. Another thing from that example that I think is useful for this kind of conversation is the transparency of the media who benefit from this. We were battling heavily to have this clause explicitly mentioned in the legal text. It is the requirement that the media who benefit, who self-declare as media, are transparent, and that this list is easily accessible for everyone to read, so for civil society and academia, to make sure that bad actors are not misusing or abusing this legal provision. So I think there is something to look into there. On generative AI, I think this is a very relevant conversation. And again, I would agree with Nikhil that this is a new battlefield somehow. We haven't resolved the old one. We haven't resolved the old risks to media pluralism, or the political influences and so on, and even the safety issues for journalists, and we've moved to the area of digital platforms. So these two battles were fought in parallel. And now we also have generative AI, which is profoundly disrupting the information sphere. And I think the biggest change that is happening with generative AI is that we are moving from the fragmentation of the public sphere that we had with digital platforms to what we call an audience of one. This is extreme personalization of the interaction between an individual and the content that this individual is exposed to, which is generated by these models, these statistical models and systems that we don't really know how they operate, because of course there is a lack of transparency, there is a lack of accountability. We are not really sure what kind of data they are trained on, and there are a lot of issues with the data that they're trained on in terms of biases, lack of representativeness, and so on. We are seeing, for example, cases such as Iceland. Iceland as a state strategically decided that, for the AI future we are entering, it is important for their language and their culture to be represented. So they willingly gave all they have in the digital data world for free to OpenAI, just to be represented in those models, because they saw this as a priority. And then, on the other hand, unlike the New York Times case, where the New York Times is suing OpenAI for breach of copyright because they used their content without a license or an agreement, what we're seeing in Europe is that the publishers, especially the major ones, such as Le Monde, Axel Springer, El Pais in Spain, and similar, are making deals with these companies. Deals that are opaque, so we don't know what these deals are, but, for example, the CEO of Le Monde said that it's a game-changing deal for them as one publisher, one media company. But this is probably not the best way forward, because it is fragmenting and weakening the position of publishers and weakening even further the position of smaller publishers and journalists and so on. So I think in this context the Danish model is a very interesting one, because they started from, I think it's a trade union, but I would need to double-check whether it's a professional association or a trade union, but it was an existing organization of journalists in the country who decided that the best approach is to go for collective negotiations with Big Tech, because this will make them stronger.
And they also decided to use all the legal instruments and regulatory frameworks that are in place in Europe to make their position stronger, so to ally somehow with the political power in the country to back them in this fight against the Big Tech giants. Of course, this battle is ongoing; there is back and forth. Sometimes they manage to progress and then there is a backlash from Big Tech. So this is at a very early stage, very fresh, but it is, I think, a very interesting and relevant case to observe, to see how things can or should be done. Because I do believe that one of the lessons learned from the existing negotiation frameworks was that this fragmentation doesn't really serve journalists and media. So a collective approach is probably a better one, and we are seeing much more happening on that end, you know, news media organizations coming together and finally starting to understand that they are stronger if they do this together.

Bia Barbosa: Thank you, Eva. And just for the record, I would like to mention that we, as the Brazilian Internet Steering Committee, tried to invite Google and Meta representatives for this conversation, but we didn’t manage to convince them to come, as usually happens on other occasions. I see that Nikhil and Juliana have raised their hands. I’d just like to check whether there’s anyone online asking questions. So, Nikhil, do you mind if I give the floor to Juliana first?

Nikhil Pahwa: Of course. Please go ahead. I’ve said quite a bit of it.

Juliana Harsianti: Okay. I think our discussion has moved from digital platforms to AI, which has become our major concern for journalism in Indonesia. In Indonesia, generative AI, especially large language models, threatens not only copyright, as Nikhil mentioned, and the integrity of information, but also the work of journalism itself, because journalists have started to generate news using ChatGPT, for example, or another large language model, and then simply edit it and publish it on their news sites. Whether this is a problem or an acceptable procedure is still being debated among media companies and journalism associations: whether it is good or ethical to publish generated news, or whether they should use large language models and generative AI only to find sources for the news, and then write it themselves before publishing it as a news article. As for regulation, yes, I think we need regulation by the state or the government, but the problem is that producing regulation takes time, while the technology is running fast. By the time the government publishes a regulation on generative AI, we may already have models for news whose abilities go well beyond the ChatGPT we know at the moment. What we think should be done is for associations, not only in journalism but also in creative media, to join forces: journalist associations and associations of creative workers discussing and creating ground rules on what generative AI should and should not be used for in their work. This is based more on ethics than on regulation, and for the moment they think this is enough, but I think we need stronger regulation, backed by law enforcement, to address the impact of generative AI on journalism and creative work. Back to you.

Bia Barbosa: Thank you very much, Juliana. Nikhil, please.

Nikhil Pahwa: Thanks, I’ll just respond to one thing that Juliana said. While we want strong regulation of AI, I think it’s going to be very difficult to get because of what is happening geopolitically: the EU is being looked at as too strong a regulatory player, and countries are afraid that they will lose out on innovation and on the AI battle. So, at least in India, from what I can see, there is a lot of pressure not to regulate AI, and this is what the opposition in the Brazilian parliament is arguing as well: if you regulate strongly, you lose out. The other thing to look at, just responding to Eva, is that one way of ensuring that media owners get enough compensation is to not seek compensation only for media owners. If anybody’s copyrighted content has been used for training models, whether it belongs to musicians, authors or media owners like us, we should get compensated. I had a conversation with a lawyer a few months ago who said that AI ingesting our content is like any person reading it, because when they give an output it is not the exact same thing, it is their understanding of our content. I would actually say that the power law applies here: the ability of AI to ingest vast amounts of our content from across the globe is far greater, and so there needs to be protection for creators, and that creator could be of any kind, whether media, movies, books, anything. I would also say that there are other mechanisms where AI does need to be regulated; for example, there has to be regulation for data protection. Eva mentioned bias, and I think bias is the trickiest one to regulate because it is about how one sees the world, and perhaps a plurality of AI systems needs to come in, in order to ensure different kinds of representation, just as bias exists in society. On the New York Times case, I will actually be surprised if there is a verdict, because we should not forget that the New York Times filed the case against OpenAI after negotiations for compensation failed. I would be surprised if OpenAI does not find a way of compensating the New York Times and settling out of court, because they would not want a verdict, given that the Times’ content has been ingested by OpenAI. There is one additional challenge that comes in, which is that this could be positioned as usage for research purposes. AI companies are trying to position ingesting our content as a mechanism for research, and in some countries there can be exceptions to copyright for research purposes. So this is another challenge that, I think, they are faced with. But a fourth thing that is emerging now, over time, as I see it and as I talk to a lot of AI founders, is that the usage of synthetic data, which is data generated by AI itself, is also coming into the mix, to the point where future content may not be needed for large language models, because they are already trained on existing content. In that case, compensation for future uses may no longer exist. Because let us face it, these are language models; they are not necessarily fact models. Anyone who relies on AI for facts is probably going to get something or the other wrong, and it is going to become problematic. So I still feel that media does have an opportunity in its factual accuracy going into the future, where AI will always fail because its outputs are probabilistic in nature.
I know I’m not answering many things, because this is still uncharted territory; this is still evolving as we speak. But we need to take all of these factors into account. Thank you.

Bia Barbosa: Of course, and I think there’s another topic that we didn’t mention here today, which is that for the journalistic community it is interesting to have journalistic content training AI systems; otherwise, the results these AI systems bring us will be information that we cannot trust in the end. So I think it is important to have journalistic content being used by AI systems, but in a fair and compensated way that deals with copyright issues, because for those of us who support the integrity of information online, it is important to have at least some journalistic content considered in the training of these systems. I see that Eva has raised her hand, and we are approaching the end of our session. So I am going to give the floor once to each one of you and ask you to bring your final comments on this topic. Thank you again for being with us. We can start with you, Eva. Thank you.

Iva Nenadic: Thank you very much. I think this is probably just the beginning of the conversation, but it is excellent to have this conversation at such a global scale, because I think exchanges like this are crucial to move us forward, and we should do more of them. I won’t conclude on anything, because it is very difficult to give final remarks when all of these are still open questions, but I would like to put one more consideration forward. What we see from a lot of surveys is that trust in journalism is declining, and the latest Reuters news report, for example, suggests that people see journalists as drivers of polarization. Why this is the case has not been reflected on enough within the profession itself. Of course there are multiple reasons for it, and there are also very strong smear and negative campaigns by politicians against journalists, who of course want to undermine the credibility of the profession because that works better for them. But I think what we are not seeing sufficiently is self-reflection: where have we failed as a profession, especially in reconnecting with youth, with young audiences? Because clearly there is a gap there. Young people are departing from the media in the traditional sense, they are departing from journalism in the traditional sense, and journalists are somehow ignoring this fact; we don’t see enough self-reflection on that side. Then there is also the question of creating value for audiences. I don’t think that media and journalism in the traditional sense are investing enough in this. There is a demand that journalism and media should be treated as a public good, and I strongly support the idea that media and journalism, when professional and ethical, are definitely a public good and should also be supported by public subsidies in a way that is transparent, fair and contributes to media pluralism. But at the same time, there has to be a bit more self-reflection, and more incentives or initiatives coming from within the profession. At the moment, what we are seeing is a lot of complaints: we are captured by platforms, we are being destroyed by platforms, we need help. But the question of what value journalism actually has to offer people has been pushed aside or forgotten a little bit. So I think the best case for journalism would be to revive or remind us of what this value actually is, and how journalists can create value with these new tools and technologies that are at everyone’s disposal, including media and journalism. That would make a stronger case for why people should go back to journalism and media and support them more.

Bia Barbosa: Thank you very much, Eva. Juliana, please, since it’s getting to 10 o’clock in Jakarta.

Juliana Harsianti: Oh, yes, thank you. I agree with what Eva said: we cannot draw a conclusion for our discussion, because this kind of discussion needs to be continued in the future, and it needs to become a regular conversation, in developed and developing countries, in the Global South and the Global North. It is important for journalists to find new forms on these digital platforms: how to deal with big tech, how to deal with generative AI, and how to keep ethics within journalism amid the influence of digital platforms and generative AI, which have been challenging their work and the business models of media companies. The conversation should also feed into policy, whether nation-state policy or the rules of journalism associations and media companies at the regional or national level, so that there is a better environment for journalists to keep creating and keep surviving in this digital era.

Bia Barbosa: Thank you. Thank you very much, Juliana. Nikhil, please, your final remarks.

Nikhil Pahwa: Thank you, and thank you for having me here. It’s been a great conversation. I’m both a journalist and an entrepreneur, and I am a capitalist in how I work, but I do that ethically. I do feel that as media we have to find our own business models rather than relying on subsidies, government support, or anything from the government, to be honest, because anytime the government comes into a tripartite relationship between government, media and big tech, two things happen, and I feel this strongly. It may be different in Europe, but in the Global South, governments use funds as a mechanism for influencing the media. And secondly, if the media pushes governments to regulate big tech, then government creates regulations over big tech and uses that as a mechanism to regulate free speech. So, to be honest, I do not want the government in this relationship, because it has an impact on democracy and on media freedom, whether directly or indirectly, whenever governments are involved. I would rather we figure out our own business models, and if there has to be regulation, it has to be applicable across society, not specific to the media. I do not feel we need special treatment, and I do not feel that we should have special treatment. We have to adapt as times change; we had to adapt when we moved from traditional business models to online business models, and now from online to AI. But at the same time, if someone is stealing our content, we need to go to court to protect our rights. So I strongly believe that I do not want government in the picture and that we do not need protection; we need to fight our own battles and we need to innovate on our own. For far too long we have allowed all the innovation to centre around big tech, when we have had the same opportunity to build audience relationships, and I do not think expecting regulation, laws and policies to support us is going to solve the problem for us. I know this is antithetical to what this conversation has been about, but that is the way I run my media business. Thank you.

Bia Barbosa: And of course, one thing is government and another is the role of the state, which we brought up at the beginning of our conversation and which is one of the controversies mapped in the report we published as the Brazilian Internet Steering Committee. I totally agree about the risk we run when governments regulate freedom of expression issues, or regulate technology that is related to freedom of expression. But I also believe we have to search for some kind of balance between the big companies, in countries like mine, Brazil, where you have the big national media companies and the global big techs, with the public, the citizens, caught in the middle. The state has a role to play in bringing at least more balance to the conversation, but of course it is not only governments that can bring this balance: we have the judiciary, we have independent regulatory bodies. So there are other alternatives we have to put on the table to try to find solutions that respect the specificities of each of the countries where we are discussing this kind of problem, but also from a global perspective, because we are dealing with global companies, and achievements in some countries may help us deal with this in other realities. From a Global South perspective, I think we can learn a lot from other countries that are tackling this problem. So once again, thank you very much for your time, your insightful thoughts, and for spending some time with us here at the IGF. As you mentioned, this conversation is only the beginning, and from the Brazilian Internet Steering Committee’s perspective I would like to thank you very much and to make ourselves available for any further exchange we might have. To everybody who is listening online or here with us, a good evening. Thank you very much. Bye. Transcribed by https://otter.ai


Bia Barbosa

Speech speed

137 words per minute

Speech length

3344 words

Speech time

1460 seconds

Platforms have disrupted traditional media business models

Explanation

Digital platforms have transformed the digital advertising ecosystem, impacting contemporary journalism. This has led to a shift in revenue from journalism to digital platforms, reshaping media consumption, production, and distribution.

Evidence

The exponential growth of digital platforms and their business models based on data collection and targeted advertising

Major Discussion Point

Impact of digital platforms on journalism

Agreed with

Nikhil Pahwa

Iva Nenadic

Agreed on

Digital platforms have disrupted traditional media business models

AI systems using journalistic content to train models raises copyright concerns

Explanation

The use of journalistic content to train AI models without compensation raises copyright issues. This practice is being challenged through legal cases in various countries.

Evidence

Legal cases against AI companies by news organizations like the New York Times in the US and ANI in India

Major Discussion Point

Challenges posed by AI to journalism

Maintaining journalistic ethics and quality is crucial amid technological disruption

Explanation

Barbosa emphasizes the importance of maintaining journalistic ethics and quality in the face of technological disruptions. This is crucial for ensuring the integrity of information online.

Major Discussion Point

Future of journalism and media sustainability


Nikhil Pahwa

Speech speed

140 words per minute

Speech length

2726 words

Speech time

1168 seconds

Platforms benefit media by driving traffic, but also compete for advertising

Explanation

Digital platforms like Google and Facebook send traffic to media websites, which is beneficial. However, they also compete with media companies for advertising revenue on their platforms.

Evidence

For most media publications, a majority of their traffic comes from search and social media

Major Discussion Point

Impact of digital platforms on journalism

Agreed with

Bia Barbosa

Iva Nenadic

Agreed on

Digital platforms have disrupted traditional media business models

Australia’s news bargaining code set problematic precedent of paying for links

Explanation

Pahwa argues that forcing platforms to pay for linking out to news content is flawed. He believes linking is a fundamental principle of the internet and mutually beneficial for both platforms and media.

Evidence

The example of Facebook’s response to Canada’s Online News Act, where they removed news from their platform

Major Discussion Point

Regulatory approaches to platform-media relationships

Differed with

Iva Nenadic

Differed on

Approach to platform remuneration

Regulation should focus on algorithmic accountability and transparency, not mandating payments

Explanation

Instead of forcing platforms to pay for linking, Pahwa suggests focusing on algorithmic accountability. This would ensure fairness in how content is surfaced on platforms without discriminating against smaller media outlets.

Major Discussion Point

Regulatory approaches to platform-media relationships

Government involvement in media-platform relationships risks compromising media independence

Explanation

Pahwa expresses concern about government involvement in regulating relationships between media and platforms. He argues this could lead to governments using funds to influence media or using regulations to control free speech.

Evidence

Examples from the Global South where governments use funds to influence media

Major Discussion Point

Regulatory approaches to platform-media relationships

Differed with

Iva Nenadic

Differed on

Role of government regulation

AI summaries threaten to cannibalize traffic from news sites

Explanation

AI-generated summaries, such as those provided by Google’s search results, potentially reduce traffic to news websites. This is because users can get information without clicking through to the original source.

Evidence

Examples of AI tools like Perplexity that compile facts from news sources into fresh articles

Major Discussion Point

Challenges posed by AI to journalism

Agreed with

Juliana Harsianti

Iva Nenadic

Agreed on

AI poses new challenges to journalism

Media need to innovate and develop new business models rather than rely on subsidies

Explanation

Pahwa argues that media companies should focus on developing innovative business models instead of relying on government subsidies or protection. He believes this approach is necessary for maintaining independence and adapting to changing times.

Evidence

His personal experience as a media entrepreneur running a business ethically without relying on government support

Major Discussion Point

Future of journalism and media sustainability


Juliana Harsianti

Speech speed

99 words per minute

Speech length

1063 words

Speech time

640 seconds

Small media can use platforms to reach audiences, but face sustainability challenges

Explanation

Small media outlets in Indonesia use digital platforms to promote freedom of press and reach wider audiences. However, they struggle with sustainability as they avoid relying on advertising revenue from platforms.

Evidence

Examples of Magdalene and Project Multatuli, two online media platforms in Indonesia focusing on gender issues and in-depth journalism respectively

Major Discussion Point

Impact of digital platforms on journalism

Journalists using AI to generate content raises ethical issues

Explanation

In Indonesia, some journalists are using AI tools like ChatGPT to generate news content, which they then edit and publish. This practice raises ethical concerns within the journalism community.

Evidence

Ongoing debate in Indonesia about the ethics of using AI-generated content in news production

Major Discussion Point

Challenges posed by AI to journalism

Agreed with

Nikhil Pahwa

Iva Nenadic

Agreed on

AI poses new challenges to journalism

Small and alternative media face unique sustainability challenges

Explanation

Small and alternative media outlets in developing countries face distinct challenges in maintaining sustainability. They often rely on donor funding and individual donations rather than traditional advertising models.

Evidence

Examples of business models used by small media outlets in Indonesia, such as relying on donations and avoiding Google ads

Major Discussion Point

Future of journalism and media sustainability


Iva Nenadic

Speech speed

158 words per minute

Speech length

4068 words

Speech time

1541 seconds

Platforms have tremendous power over shaping information systems with little accountability

Explanation

Digital platforms have become key infrastructures where people engage with news and information that shape political opinions. However, they have little responsibility or accountability for this power.

Evidence

The shift of opinion-forming power from traditional media to online platforms

Major Discussion Point

Impact of digital platforms on journalism

Agreed with

Bia Barbosa

Nikhil Pahwa

Agreed on

Digital platforms have disrupted traditional media business models

Collective bargaining by media coalitions may be more effective than individual deals

Explanation

Nenadic suggests that media organizations coming together for collective negotiations with big tech companies might be more effective. This approach could strengthen the position of publishers, especially smaller ones.

Evidence

Example of a coalition in Denmark where media organizations are collectively negotiating with digital platforms

Major Discussion Point

Regulatory approaches to platform-media relationships

Differed with

Nikhil Pahwa

Differed on

Approach to platform remuneration

AI’s impact on journalism requires new regulatory frameworks

Explanation

The rise of generative AI is profoundly disrupting the information sphere, moving from fragmentation to extreme personalization. This shift requires new regulatory approaches to address issues of transparency, accountability, and bias in AI systems.

Evidence

Examples of AI companies making opaque deals with major publishers, potentially weakening the position of smaller publishers and journalists

Major Discussion Point

Challenges posed by AI to journalism

Agreed with

Nikhil Pahwa

Juliana Harsianti

Agreed on

AI poses new challenges to journalism

Journalism must demonstrate its value proposition to audiences

Explanation

Nenadic argues that journalism needs to reflect on its role and demonstrate its value to audiences, especially younger ones. This self-reflection is crucial for reconnecting with audiences and justifying support for journalism as a public good.

Evidence

Declining trust in journalism and perception of journalists as drivers of polarization, as reported in the Reuters news report

Major Discussion Point

Future of journalism and media sustainability

Agreements

Agreement Points

Digital platforms have disrupted traditional media business models

Bia Barbosa

Nikhil Pahwa

Iva Nenadic

Platforms have disrupted traditional media business models

Platforms benefit media by driving traffic, but also compete for advertising

Platforms have tremendous power over shaping information systems with little accountability

All speakers agree that digital platforms have significantly impacted traditional media business models, reshaping the landscape of media consumption, production, and distribution.

AI poses new challenges to journalism

Nikhil Pahwa

Juliana Harsianti

Iva Nenadic

AI summaries threaten to cannibalize traffic from news sites

Journalists using AI to generate content raises ethical issues

AI’s impact on journalism requires new regulatory frameworks

The speakers agree that AI technologies, including generative AI and AI-powered summaries, present new challenges to journalism, ranging from ethical concerns to potential traffic loss and the need for new regulatory approaches.

Similar Viewpoints

Both speakers suggest alternative approaches to regulating platform-media relationships, focusing on transparency and collective action rather than mandated payments.

Nikhil Pahwa

Iva Nenadic

Regulation should focus on algorithmic accountability and transparency, not mandating payments

Collective bargaining by media coalitions may be more effective than individual deals

Both speakers emphasize the need for journalism to adapt and demonstrate its value in the changing media landscape, particularly for smaller and alternative media outlets.

Juliana Harsianti

Iva Nenadic

Small and alternative media face unique sustainability challenges

Journalism must demonstrate its value proposition to audiences

Unexpected Consensus

Importance of maintaining journalistic ethics and quality

Bia Barbosa

Iva Nenadic

Juliana Harsianti

Maintaining journalistic ethics and quality is crucial amid technological disruption

Journalism must demonstrate its value proposition to audiences

Journalists using AI to generate content raises ethical issues

Despite differing views on regulation and business models, there was an unexpected consensus on the importance of maintaining journalistic ethics and quality in the face of technological disruptions. This agreement spans across different regional perspectives and approaches to media sustainability.

Overall Assessment

Summary

The main areas of agreement include the disruptive impact of digital platforms on traditional media business models, the challenges posed by AI to journalism, and the importance of maintaining journalistic ethics and quality. There was also some consensus on the need for alternative approaches to regulating platform-media relationships and the importance of journalism demonstrating its value to audiences.

Consensus level

The level of consensus among the speakers was moderate. While there was agreement on the broad challenges facing journalism in the digital age, there were divergent views on specific regulatory approaches and the role of government in addressing these challenges. This implies that while there is a shared understanding of the problems, finding universally accepted solutions remains complex and context-dependent.

Differences

Different Viewpoints

Role of government regulation

Nikhil Pahwa

Iva Nenadic

Government involvement in media-platform relationships risks compromising media independence

Collective bargaining by media coalitions may be more effective than individual deals

Pahwa argues against government involvement in regulating media-platform relationships, citing risks to media independence. Nenadic, however, suggests that collective bargaining supported by regulatory frameworks could be beneficial.

Approach to platform remuneration

Nikhil Pahwa

Iva Nenadic

Australia’s news bargaining code set problematic precedent of paying for links

Collective bargaining by media coalitions may be more effective than individual deals

Pahwa criticizes Australia’s news bargaining code as setting a problematic precedent for paying for links, while Nenadic suggests collective bargaining as a potentially effective approach for fair remuneration.

Unexpected Differences

Sustainability strategies for media

Nikhil Pahwa

Juliana Harsianti

Media need to innovate and develop new business models rather than rely on subsidies

Small and alternative media face unique sustainability challenges

While both discuss media sustainability, Pahwa unexpectedly argues against relying on subsidies, emphasizing innovation, while Harsianti highlights the unique challenges faced by small media outlets in developing countries that often rely on donor funding.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of government regulation, approaches to platform remuneration, and strategies for media sustainability.

Difference level

The level of disagreement is moderate, with speakers generally acknowledging similar challenges but proposing different solutions. This reflects the complexity of balancing media independence, economic sustainability, and regulatory approaches in the rapidly evolving digital media landscape.

Partial Agreements

Both Pahwa and Nenadic agree on the need for regulation addressing algorithmic accountability and transparency. However, they differ in their approach, with Pahwa focusing on platforms and Nenadic emphasizing the need for new frameworks to address AI’s impact.

Nikhil Pahwa

Iva Nenadic

Regulation should focus on algorithmic accountability and transparency, not mandating payments

AI’s impact on journalism requires new regulatory frameworks

Takeaways

Key Takeaways

Digital platforms have significantly disrupted traditional media business models and journalism

There are differing views on regulatory approaches to platform-media relationships, with some favoring government intervention and others opposing it

AI systems pose new challenges for journalism, including copyright concerns and potential cannibalization of traffic

The future sustainability of journalism requires innovation in business models and demonstrating value to audiences

Small and alternative media face unique challenges in the digital landscape

Resolutions and Action Items

None identified

Unresolved Issues

How to effectively regulate AI’s use of journalistic content without stifling innovation

Determining fair compensation models for platforms’ use of media content

Balancing the need for regulation with concerns about government involvement in media

How to define ‘journalism’ and ‘media’ in the digital age for regulatory purposes

Addressing declining trust in traditional journalism, especially among younger audiences

Suggested Compromises

Collective bargaining by media coalitions with platforms instead of individual deals

Focusing regulation on algorithmic accountability and transparency rather than mandating payments

Creating public sector funds financed by digital platforms to support journalism, managed in a participatory way

Developing self-regulatory frameworks within the journalism industry to address ethical concerns around AI use

Thought Provoking Comments

The tricky thing with AI is that facts are not under copyright and media companies, news reporters like us essentially report facts and there is copyright in how we write things but not copyright in what we write about because facts cannot be exclusively with one news company because that is effectively the public good is in the distribution and easy availability of facts.

speaker

Nikhil Pahwa

reason

This comment highlights a key challenge in regulating AI’s use of journalistic content – the distinction between copyrightable expression and non-copyrightable facts. It introduces complexity to the discussion of how to protect journalistic work in the age of AI.

impact

This led to further discussion about the legal and ethical implications of AI systems using journalistic content, and the challenges of regulating this use.

We are seeing, for example, cases such as Iceland. Iceland as a state strategically decided that it is important for them, and for the AI future we are entering, for their language and culture to be represented. So they willingly gave everything they have in the digital data world for free to OpenAI, just to be represented in those models, because they saw this as a priority.

speaker

Iva Nenadic

reason

This example introduces a new perspective on the relationship between AI companies and content providers, showing how some entities might willingly provide content to ensure representation.

impact

This comment broadened the discussion beyond just compensation issues to include considerations of cultural representation and diversity in AI training data.

I strongly believe that I do not want government in the picture and we do not need special treatment, we have to adapt as times change, we have to adapt from when we move from traditional business models to online business models, from online to AI but at the same time if someone is stealing our content we need to go to court to protect our rights in a sense.

speaker

Nikhil Pahwa

reason

This comment challenges the prevailing narrative of seeking government intervention and regulation, instead advocating for media companies to adapt and innovate independently.

impact

This perspective shifted the conversation to consider the potential drawbacks of government involvement and the importance of media companies’ own adaptability and innovation.

What’s also happening, you know, just to cover the complete situation, is that we are facing a new threat with AI summaries. What Google does on its search, especially because unlike traditional search, which used to direct traffic to us, AI summaries potentially cannibalise traffic, they don’t send us traffic anymore.

speaker

Nikhil Pahwa

reason

This comment introduces a new dimension to the discussion by highlighting how AI summaries are changing the dynamics of web traffic and potentially threatening media companies’ business models.

impact

This led to further discussion about the evolving challenges faced by media companies in the digital age, beyond just content use and compensation issues.

Overall Assessment

These key comments shaped the discussion by introducing nuanced perspectives on the challenges faced by media companies in the age of AI and digital platforms. They moved the conversation beyond simple issues of compensation to consider broader implications for copyright, cultural representation, business model adaptation, and the role of government regulation. The discussion evolved to encompass a more complex understanding of the interplay between journalism, technology, and regulation in the digital age.

Follow-up Questions

How to define media and journalism in the current digital landscape?

speaker

Iva Nenadic

explanation

This is a foundational issue for developing regulatory frameworks and policies to support journalism in the digital age.

How to ensure fair compensation for content used to train AI models?

speaker

Nikhil Pahwa

explanation

This is crucial for protecting the rights and sustainability of content creators, including journalists, as AI systems increasingly use their work.

How to address the ethical implications of journalists using generative AI to produce news content?

speaker

Juliana Harsianti

explanation

This raises important questions about journalistic integrity and the future of the profession in the age of AI.

How to revive trust in journalism, especially among younger audiences?

speaker

Iva Nenadic

explanation

Addressing declining trust is crucial for the future relevance and sustainability of journalism.

How can media companies innovate and develop sustainable business models in the digital age?

speaker

Nikhil Pahwa

explanation

This is essential for ensuring the long-term viability of journalism without relying on government intervention or subsidies.

How to balance the need for regulation of big tech with protecting free speech and media independence?

speaker

Nikhil Pahwa and Bia Barbosa

explanation

This is a complex issue that requires careful consideration to protect both journalistic freedom and the public interest.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.