WS #211 Disability & Data Protection for Digital Inclusion

Session at a Glance

Summary

This discussion focused on digital inclusion and data protection for persons with disabilities in the context of internet governance. Speakers highlighted the importance of involving persons with disabilities in policy-making and technology development processes. They emphasized the need for a more nuanced understanding of disability that goes beyond medical definitions and considers social barriers.

The conversation addressed several key issues, including the challenges of data collection on disability, the importance of digital accessibility, and the risks posed by automated decision-making systems. Speakers criticized the tendency to infantilize persons with disabilities in data protection laws and stressed the need for more agency and autonomy in digital spaces.

The discussion also explored the potential of local digital inclusion initiatives and the role of assistive technologies. However, concerns were raised about the unchecked optimism surrounding AI-powered assistive technologies and the need for greater accountability in their development and deployment.

Speakers highlighted the lack of representation of persons with disabilities in AI fairness conversations and governance policies. They called for more inclusive approaches to technology design and deployment that consider the diverse needs of persons with disabilities.

The importance of accessible education platforms and inclusive pedagogies was emphasized, with UNESCO’s work on guidelines for ethical AI development in education being highlighted. The discussion concluded with a call for more comprehensive definitions of disability in policy frameworks while also addressing concerns about potential misuse of disability classifications.

Key points

Major discussion points:

– The need for more inclusive data protection and privacy frameworks that consider the diverse needs of persons with disabilities

– Challenges in gathering accurate data on disability while maintaining privacy and autonomy

– The importance of involving persons with disabilities directly in technology development and policymaking

– Concerns about AI and automated systems potentially discriminating against or excluding persons with disabilities

– The need for a social model approach to disability rather than a purely medical one

Overall purpose:

The discussion aimed to explore how to make the internet and digital technologies more inclusive for persons with disabilities, particularly in the context of data protection and privacy regulations. It sought to identify gaps in current approaches and suggest ways to center disability perspectives in internet governance.

Tone:

The tone was largely collaborative and solution-oriented, with speakers building on each other’s points. There was a sense of urgency about addressing exclusion and discrimination. The tone became slightly more critical when discussing shortcomings of current policies and AI systems, but remained constructive overall. Speakers emphasized the importance of moving beyond surface-level inclusion to more fundamental changes in approach.

Speakers

– Fawaz Shaheen: Moderator

– Tithi Neogi: Analyst at the Centre for Communication Governance, National Law University Delhi

– Angelina Dash: Project officer at the Centre for Communication Governance (CCG), National Law University Delhi

– Eleni Boursinou: Consultant with UNESCO’s communication and information sector

– Osama Manzar: Director of the Digital Empowerment Foundation

– Maitreya Shah: Tech policy fellow at UC Berkeley, affiliate at the Berkman Klein Center for Internet and Society, disabled lawyer and researcher from India

Additional speakers:

– Dr. Mohammad Shabbir: Coordinator of Internet Governance Forum’s Dynamic Coalition on Accessibility and Disability, from Pakistan

Full session report

Digital Inclusion and Data Protection for Persons with Disabilities: An IGF Session Summary

This summary provides an overview of an Internet Governance Forum (IGF) session on digital inclusion and data protection for persons with disabilities. The discussion brought together experts from various fields to address key challenges and propose solutions for creating a more inclusive digital landscape.

Introduction

The session began with an acknowledgment of the collaborative document for best practices being compiled throughout the IGF. The moderator, Fawaz Shaheen, emphasized the workshop's policy of having speakers briefly describe their physical attributes in their own words and noted the accessibility considerations for the session, including the interpretation channel for on-site participants.

Digital Accessibility and Inclusion

The discussion emphasized the critical importance of making digital technologies and services accessible and inclusive for persons with disabilities. Tithi Neogi, from the Centre for Communication Governance in India, highlighted the need for accessible consent mechanisms and digital services. Osama Manzar, founder of the Digital Empowerment Foundation, stressed the importance of involving persons with disabilities in technology development and service provision, stating, “Persons with disabilities must be part of the ecosystem in all aspects of research, data collection, and implementation.”

Eleni Boursinou, from UNESCO, addressed challenges in online education accessibility for learners with disabilities. She shared an example from Rwanda where including teachers with disabilities in digital skills development initiatives proved highly effective.

Maitreya Shah, a lawyer and researcher from India, pointed out the lack of representation of persons with disabilities in AI and technology conversations, highlighting a significant gap in current approaches to digital inclusion.

Data Protection and Privacy

The conversation revealed significant concerns about data protection and privacy for persons with disabilities. Angelina Dash, from the Centre for Communication Governance in India, criticized the tendency to treat persons with disabilities like children in data protection laws, arguing for the need to recognize their agency and autonomy. She advocated for the reintroduction of sensitive personal data as a category in India’s data protection law to provide additional safeguards for vulnerable data.

Maitreya Shah raised concerns about privacy risks associated with AI-powered assistive technologies and data collection practices. This point was further emphasized by audience members who expressed worries about private data of persons with disabilities being used to train AI systems without proper consent or oversight.

AI and Automated Decision-Making Systems

The discussion highlighted the potential risks and challenges posed by AI and automated decision-making systems for persons with disabilities. Maitreya Shah pointed out the dangers of bias and discrimination against persons with disabilities in AI systems, noting that his research at Harvard revealed how AI fairness metrics and governance policies often explicitly exclude disability from their scope.

Eleni Boursinou discussed UNESCO’s work on guidelines for ethical AI development in education, emphasizing the importance of considering the needs of persons with disabilities in these frameworks. The speakers agreed on the critical need for disability representation in AI fairness and governance conversations to ensure that these technologies do not perpetuate or exacerbate existing inequalities.

Disability Data and Definitions

A significant portion of the discussion focused on the challenges and limitations of current approaches to disability data collection and definition. Maitreya Shah criticized the medical or impairment-based approaches to disability data collection, arguing for a shift towards a social model of disability aligned with the UN Convention on the Rights of Persons with Disabilities (UNCRPD).

This perspective sparked a debate about the need for more inclusive and comprehensive definitions of disability in policy frameworks. However, audience members raised concerns about potential misuse or impersonation if definitions became too broad, highlighting the complex balance required in this area.

Conclusion and Future Directions

The discussion highlighted the complex challenges in achieving digital inclusion and data protection for persons with disabilities. It emphasized the need for more inclusive approaches to technology design, policy-making, and data collection that center the perspectives and needs of persons with disabilities.

Dr. Mohammad Shabbir, an audience member, contributed to the discussion by emphasizing the importance of considering the diversity of disabilities and the need for tailored approaches in technology development.

The session concluded with a reminder about the collaborative document for best practices and an invitation for participants to contribute their insights. This document aims to compile practical strategies for improving digital inclusion and data protection for persons with disabilities, serving as a valuable resource for future policy and technology development efforts.

Session Transcript

Fawaz Shaheen: Yes, I think it’s working now. Thank you so much. We’ll just start our session now. Welcome to all our on-site and our online participants. For our on-site participants, it is channel number 2. You can get in on channel number 2. And as the session progresses, at some point you can come and check the workshop policy. And there’s a shared document that we can all be working on. We’ll talk more about it as the session goes, but at the front of the desk is my colleague, Nidhi. She has those QR codes. If anyone wants to scan and get those documents, you can do that. Now, before we start, I would just like to do some housekeeping and check if our online participants are able to speak and come on. So Maitreya, can you unmute? Or Angelina, can you unmute them one by one and see if they’re able to speak, if you’re able to hear them? Sure, I’ll just. Hello, hello. Angelina, you’re audible. You’re audible, thanks. Yes, morning, morning. You’re audible, thank you. Yeah, yeah. Hi, am I audible? Yes, yes, you’re audible. Thank you so much. I hope everyone’s able to hear them. Now, as I said, this is a session where we would like to talk about some of the most invisibilized conversations, some of the conversations that we often miss out on. This is an opportunity to discuss how to make the internet more inclusive in a broader sense, but also more particularly as we all move towards establishing data protection regimes, establishing privacy regimes, including those of us in India. This is a chance to have a conversation about how data protection and privacy regimes can make the internet more inclusive instead of more exclusive. That’s the basic idea, the basic sense with which we’ve started. For our session today, we have a workshop policy to make sure that people who are joining us both online and on site with diverse abilities, with different kinds of disabilities, are able to experience this session as well as the rest of us. So just a couple of pointers. We are requesting all of our speakers to briefly describe their physical attributes in their own words. For instance, I’m Fawaz, I’m a bearded man, kind of big, and I’m wearing a gray jacket today, I have short hair. Just something brief like that, whenever you’re speaking, it’ll be helpful to bring everybody in to make it a little bit more of an inclusive conversation. So thank you in advance for that. And apart from that, if you want a detailed look at the workshop policy, although I’m sure there’s nothing very difficult, there’s nothing extraordinary that we’re asking. It’s just basic stuff, being respectful, being inclusive, bringing everyone into the conversation. We do have the workshop policy; the QR code is with Nidhi. You can scan that, take a look. You can also use it for your own sessions, customize it, give us feedback. We also have another document for the Q&A session. We’ll show you that document. It is a shared Google document in which we’d like to build a code of best practices for an inclusive internet. So that’s something we’ll invite all of you to participate in. For the online participants, my colleagues Angelina and Tithi will be sharing them in the chat. For the on-site participants, my colleague Nidhi has a QR code that you can scan to get access. Now, before we begin, before I introduce our speakers: this session has been organized and conceptualized by my colleagues at the Centre for Communication Governance, Angelina and Tithi.
You can see them both on the screen right now. And they’ve also recently authored a policy brief on disability and data protection, which looks particularly at the new law in India and its interaction with the Rights of Persons with Disabilities Act. It tries to give a sense of where we are standing and an overall position. So before we begin, I would like to request Angelina and Tithi to just take 10 minutes, walk us through your brief and lay out what you envision for this session. And after that, we have an excellent panel of speakers. I’ll be introducing them one by one and we’ll take this forward. Thank you.

Tithi Neogi: Thank you for that introduction. Hi everyone, I’m Tithi Neogi. I’m an analyst at the Centre for Communication Governance. My pronouns are she and her. I wear glasses, I have wavy hair and I’m wearing a red hoodie today. Over to you, Angelina, you can introduce yourself.

Angelina Dash: Hi everyone. My name is Angelina Dash and I’m a project officer at CCG, National Law University Delhi. My pronouns are she and her and today I’m wearing a red jacket and I have wavy hair.

Tithi Neogi: So today’s session is based on the policy brief on data protection for persons with disabilities in India that, like Fawaz mentioned, Angelina and I have co-authored and have been working on for a while. While India has indeed taken a step forward towards inclusion of persons with disabilities in its data protection framework online, we have identified some gaps in its nuances and we have tried to plug some loopholes to further advance digital inclusion for persons with disabilities. Some common themes that we have identified in this data protection framework, specifically for persons with disabilities, are the themes of digital access and inclusion, data autonomy and data protection. So I’ll start off with what we have found on digital accessibility, digital access and inclusion. Specifically, in our research we have identified that digital accessibility is a precursor to persons with disabilities giving meaningful consent on the internet. While the disability rights framework in India guarantees persons with disabilities the right to accessible information and communication technologies, the data protection law does not really mandate the data fiduciaries to operationalize or implement consent mechanisms that have accessible interfaces and can be used easily by persons with disabilities. Another thing that we have identified through our research is that the data protection law allows guardians of persons with disabilities to give consent on their behalf. So that reduces or takes away the onus of giving consent from the person with disability and shifts it to their guardian. And this, in turn, takes away the incentive that data fiduciaries might have had to make their consent mechanisms more accessible to persons with disabilities, because now it’s not the persons with disabilities who are giving consent directly on these consent mechanisms, but their guardians. So based on our findings on digital accessibility, we recommend making notice and consent mechanisms compliant with accessibility standards and compatible with assistive technologies, and using audiovisual formats in these consent mechanisms, electronic braille, et cetera. I’ll hand over to Angelina to continue this discussion on digital accessibility. Over to you.
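To make that recommendation concrete, here is a minimal sketch, in TypeScript, of what an accessibility-compliant consent mechanism could look like in a browser. The WAI-ARIA roles and focus handling shown follow standard WCAG practice; the dialog text, element names, and consent-recording hook are illustrative assumptions, not anything prescribed by the policy brief.

```typescript
// A minimal sketch of an accessible notice-and-consent dialog (illustrative only).
// It applies standard WAI-ARIA dialog semantics so screen readers announce the
// dialog and its purpose, and uses a native <button> so the consent control is
// operable by keyboard and assistive technologies alike.
function renderConsentDialog(container: HTMLElement, onConsent: () => void): void {
  const dialog = document.createElement("div");
  dialog.setAttribute("role", "dialog");           // announced as a dialog by screen readers
  dialog.setAttribute("aria-modal", "true");
  dialog.setAttribute("aria-labelledby", "consent-title");
  dialog.setAttribute("aria-describedby", "consent-text");

  const title = document.createElement("h2");
  title.id = "consent-title";
  title.textContent = "Consent to data processing";

  const text = document.createElement("p");
  text.id = "consent-text";
  // Plain-language notice; an audio or video alternative and an electronic
  // braille-compatible text version could be linked here, in line with the
  // brief's recommendation on audiovisual formats.
  text.textContent = "We ask your permission to process your contact details to provide this service.";

  const agree = document.createElement("button");  // native button: focusable and keyboard-operable
  agree.textContent = "I agree";
  agree.addEventListener("click", onConsent);

  dialog.append(title, text, agree);
  container.appendChild(dialog);
  agree.focus(); // move focus into the dialog so assistive-technology users land on it
}
```

The design choice worth noting is the reliance on native semantics: a real button and ARIA dialog roles are what make the same consent flow work for a screen-reader user, a keyboard-only user, and a mouse user alike, which is the substance of the "compatible with assistive technologies" recommendation.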

Angelina Dash: Thanks, Tithi. So I think another concern that we have in terms of access and inclusion is in the context of limited access to digital services without the consent of a lawful guardian. So under the data protection law in India, persons with disabilities require the consent of their lawful guardian in order to access digital services on the internet. This is concerning because of two scenarios that may arise. What do these scenarios look like? In the first case, we have persons with disabilities who cannot access digital services where the support and consent of the lawful guardian may not be required at all, for instance, something like a digital encyclopedia. In the second case, we have persons with disabilities who are hindered from accessing digital services which provide vital disability resources, like online communities for persons with disabilities. Access may also be curbed in cases where a conflict of interest may arise between the lawful guardian and the person with disability. This could include access to digital helplines for physical and sexual abuse, among other digital services. So our recommendation in this context is that persons with disabilities must be allowed to access digital services on the internet without the consent of their lawful guardian in certain cases. Over to you, Tithi.

Tithi Neogi: Thanks, Angelina. I’ll now speak about the second theme that we have identified, and that is data autonomy. So in India, indirect consent through another person is collected from two kinds of data principals, i.e. children and persons with disabilities. So in the Indian data protection law, children and persons with disabilities have been clubbed together and subjected to similar treatment while giving consent: their parent, in the case of children, or their lawful guardian, in the case of a person with disability, gives consent on their behalf. This treatment leads to infantilization of persons with disabilities on the internet, since their guardians are now being equated to the status of parents, which should not be the case, because if we look at the disability rights framework in India, a lawful guardian for a person with disabilities is envisioned to be somebody who aids or assists them and helps them with the decision-making process in a mutually consultative framework. But in the data protection law that India has right now, lawful guardians of persons with disabilities have the ability to give consent on their behalf completely, without accounting for any mutual decision making. So in our research, we have recommended decoupling persons with disabilities from children and introducing a separate provision in the data protection law that defines persons with disabilities. This definition could take into account temporary disabilities and the various degrees of support that a person with disability might need. We have also suggested that any consent collection processes online be informed by a consultative framework between persons with disabilities and their guardians, which is driven by principles of mutual decision-making. I’ll now hand over to Angelina to speak about the third common theme that we have identified. Over to you.

Angelina Dash: Thanks, Tithi. So I think another gap that we identified, as you mentioned previously as well, was in the context of data protection, specifically with regard to the absence of sensitive personal data. So the data protection law that was introduced in India last year came after a long consultative process, and previous iterations of this law had carved out sensitive personal data, or SPD. This was not included in the final law. Now, firstly, what is SPD? SPD was a distinct category of personal data warranting additional safeguards. And what did these safeguards look like? These include specific grounds for processing, including grounds like explicit consent or specific purposes of state action. Now, it’s important to question at this stage why the need for a separate category of SPD arises in the first place. There are some concerns which scholars have been raising regarding sensitivity of data being used as a basis for categorization. However, we feel that India’s data protection framework is currently at a very nascent stage. Additionally, certain data of persons with disabilities, like health data or financial data, can be more vulnerable, and this data is more susceptible to misuse for the purpose of discriminating against persons with disabilities, particularly in terms of employment, health care and social welfare. Therefore, our recommendation in our policy brief is that sensitive personal data should be reintroduced as a category within India’s data protection law. And with this, we’d now like to move from the policy brief to today’s IGF session, where we’ll be continuing the discourse on centering disability in internet governance more broadly. Through our insights from working on the policy brief, and the lived experiences of stakeholders who are persons with disabilities, we have gained an understanding of certain gaps in internet governance discourse with regard to disability. And I’d like to hand over to Tithi at this point, and she’ll elaborate a bit more on the gaps we identified. Over to you, Tithi.

Tithi Neogi: Thank you, Angelina. So these are some of the loopholes that we have discovered in the discourse on data governance, globally as well as in India, and we would really like the participants today to share their experiences and what they feel about this. The first kind of loophole that we saw was that the explicit categorization of persons with disabilities as data principals in the Indian data protection law happens to be an anomaly of sorts, because we haven’t really come across this distinct categorization of persons with disabilities as data principals in any other major statute which has been enacted. So this is something that we have been discussing: whether this separate categorization of persons with disabilities was a good measure, whether it is going to benefit persons with disabilities, whether it is the right approach or the wrong approach. We are aware that the GDPR refers to certain vulnerable classes of data principals but does not mention persons with disabilities exclusively. We would like to take this opportunity to get insights from the participants as to whether they feel that the GDPR approach is the way to go, or whether the Indian approach of specifying a certain category of data principals as persons with disabilities, and having specific measures with respect to consent collection from them, is the approach to take. We are also aware of discussions on the Brussels effect on data governance, and whether the GDPR would be a good influence, or whether the novel approach that the Indian law is taking is the way forward. We would really like to hear some discussion on this from our speakers as well as from the participants. I will now hand over to Angelina to discuss some other loopholes. Over to you, Angelina.

Angelina Dash: Thanks, Tithi. I think another loophole that we identified was the lack of a global south or global majority perspective in discourse regarding disability in internet governance. Persons with disabilities are not a homogenous group. Members of this community, especially those from global south or global majority jurisdictions like India, can often face unique and diverse challenges in terms of internet accessibility and data autonomy. This may arise from intersecting marginalization in terms of gender, caste, poverty and illiteracy, as well as broader infrastructural concerns and the digital divide. Internet governance discourse currently does not adequately account for these challenges, and that’s where we come in: we aim to highlight some of these issues through the discussions in the workshop today. And with this context and background that we’ve just provided, and the gaps that we have identified, we intend to build upon the work in our policy brief and extend the conversation to also address disability and data protection in the context of AI and automated decision-making systems. We hope to use this session as a forum to facilitate multi-stakeholder conversations and collaboration. This will enable the co-creation of best practices towards digital inclusion for persons with disabilities. With this, I would like to conclude our presentation and I now open the floor for any questions. Fawaz will be assisting us in this Q&A round.

Fawaz Shaheen: Thank you. Sorry, I think we’re doing a slight change in the format because we’re running a little short of time. So what we’ll do is we’ll take all questions together. We’ll have two rounds of questions, so we’ll also take the presentation questions after the first round. We’ve just changed that on the fly, sorry. But now, without further ado, let’s move towards the conversation. Thank you for starting the conversation. We’ll now move towards the discussion. And first off, I would like to invite our on-site speaker, Dr. Osama Manzar, to please join me on the stage. We also have two excellent speakers joining us online. We have Eleni Boursinou, who’s a consultant with UNESCO. And we also have Maitreya Shah, a lawyer and researcher, currently a tech policy fellow at UC Berkeley. I’ll be introducing all three of our speakers in more detail as I ask them questions. But also, just to remind all of you, we’d like to have the discussion be as conversational as possible, as open-ended as possible. So we’ll be doing a first round of questions. We’ll have four to five minutes each for each of the speakers. Then we’ll open it for questions. I encourage all our on-site participants to please ask all our speakers, as well as Tithi and Angelina, about this topic, about this issue. And also, all our online participants, please feel free to put your questions in the chat. Tithi, who’s our moderator online, will be taking those questions and relaying them to us for our speakers. So after the first round of speakers and questions, we’ll have another round, and we’ll end with a round of interventions. So we have about one hour left. 60 minutes is a good time to do this, I feel. And now to begin the conversation, I think I’ll first invite Eleni, who’s joining us. Eleni Boursinou is a consultant who works with UNESCO’s communication and information sector, especially on universal access to information. And Eleni, if you could unmute yourself. We would like to begin this conversation by asking you: what role do you think digital accessibility plays in furthering the sustainable development goals? Especially working from your own experience, what would you describe as the role of digital accessibility and inclusive design in enhancing digital autonomy? Not just access, but also digital autonomy for persons with disabilities. Eleni, over to you. And thank you for joining us. Thank you. Thank you for having me and for the invitation. I’m very honored to be on this panel today.

Eleni Boursinou: My name is Eleni. My pronouns are she, her. Today, I’m wearing a very Indian shirt, and I have wavy hair. I don’t know if my camera can be switched on, but if you want, you can switch it on. I think you can switch on the camera. Tithi, can you just check? So thank you very, very much for this, and for the very meaningful question. So, digital accessibility and the SDGs: digital accessibility plays a very critical role by fostering inclusion and equity, particularly for what you call persons with disabilities, but in general for any marginalized groups. By removing usability barriers, it bridges the digital divide, enabling participation in the digital economy and what we call, in the UNESCO communication and information sector, the knowledge societies. So this supports the IGF and the Global Digital Compact agenda of leaving no one behind, and aligns with the UN Convention on the Rights of Persons with Disabilities, promoting social justice and equity. On the other hand, there are also accessible tools such as OER (open educational resources) and UDL (universal design for learning) that play a significant role in empowering education, that is SDG 4, reducing dropout rates from school, enhancing educational outcomes and promoting lifelong learning. Additionally, accessible digital solutions can address challenges related to gender and disability, that is SDG 5, and can empower women and girls with disabilities to access education, employment and leadership opportunities. And finally, for SDG 17, that is international collaboration, accessibility standards can provide open solutions that contribute to global cooperation, fostering access to information and promoting partnerships. So in the context of data autonomy, digital accessibility and inclusive design play a huge role in enabling individuals with disabilities to engage with and control their data. Accessible data systems and platforms allow users to interpret and manage data independently, ensuring that everyone, regardless of ability, can participate in data-driven decision-making. This is what we call in UNESCO open solutions: that is, solutions that are cost-effective with open licensing. They can be free and open source software or open educational content, and OER platforms provide resources that empower individuals to control their data and engage in lifelong learning. Accessible digital formats and assistive technologies enhance understanding of and trust in data-driven systems, while universal and inclusive design mitigates biases in automated decision-making, ensuring fairness and safeguarding marginalized communities from discrimination. So the key takeaway is that embedding digital accessibility and UDL principles in policies and practices can ensure equitable participation in the digital economy and knowledge society. First, by improving data collection and analysis, AI can support more inclusive and equitable decision-making, ensuring that marginalized communities, including those with disabilities, are considered in policies aimed at achieving the SDGs. And second, a human-centric approach to AI and digital tools is essential for ensuring that the benefits of AI are distributed equitably and contribute to the sustainable development goals, including promoting data autonomy and accessibility. So that’s all from me for now, but I’ll be happy to answer any questions you might have.

Fawaz Shaheen: Thank you, Eleni. We’ll come back to you. And now I’d like to go to our on-site speaker, Osama Manzar. Osama, as we know, is director of the Digital Empowerment Foundation, and he works with a large community network of digital fellows who are not just working on improving access to the internet, but who also have a mandate, effectively, to train people, to work with people, to make the internet a safer and more inclusive space. And today, particularly, we are very happy to have Osama with us, because recently DEF has also come out with a report looking at ICT for empowerment, inclusivity, and access. It’s a report in which they’ve spoken to more than 250 persons with disabilities and mapped the various challenges, and also the opportunities. And so I would like to invite Osama to share some of those things. But to start off, I would like you to talk a little bit about doing this kind of work: when you look at the challenges associated with gathering data on disability, talking to persons with disability, while also maintaining autonomy, maintaining anonymity where it’s required, how do you balance that with the need for having good data to work on disability? This becomes an even more urgent question when it comes to issues of census. One thing that your report is talking about is the need for a new disability census in India. So some of those things. I know it’s a very broad question, but I’d like to start with that and we can move on from there.

Osama Manzar: Yeah, thank you, Fawaz. I will focus my discussion more on the ground realities and also on the entire community of PWDs, or whatever we call it. We have only four or five minutes, so I’ll say that there are three things that are very important. One is that we treat our people with disability as subjects, you know, or, I don’t want to say objects, but more like somebody about whom we need to do something, where there is no role played by themselves, right? So that’s the very behavior of the doers, whether it’s the corporate sector, government sector, or able people, all philanthropists, anybody. Everybody thinks that there’s something that needs to be done. It has been done for so long that if you talk to the disabled people, or people with disability, they feel like somebody will do something someday, you know, and we are just waiting. So the whole ability to come out, to assert themselves, demand, or ask for accountability is almost negligible. Almost negligible. I would say that they are not treated better than any other poor people in remote areas, or the people who are caste-wise or class-wise treated as absolutely downtrodden. So I’m coming also from the perspective of large scale. We have about, what, 50 million population? Even more; about 5% of the population of India is people with disability. That’s about equal to the indigenous community, who are also about 5%. That’s a huge, huge number in India. With that, in the last 20 years we started seeing that digital development was becoming more like an enabler, an automated enabler, for people with disability. And let me also say how people with disability see it: now, if I don’t have access to information, I am more able with a digital access device, right? If I don’t want to talk to people, then I have something to talk to online. So it enables me, my confidence, my requirements, and so on and so forth. At Digital Empowerment Foundation, we realized this by going on the ground: even though we were going on the ground to provide digital access or infrastructure, we were seeing that disability was almost invisible. Why were they invisible? They were not coming forward. We are not including them. We are not even thinking about them. All the government entitlements that we ought to deliver for them, they are not even coming to take. We don’t know how to talk to them. And then we also realized there is a lot of dogmatic, traditional, look-down-upon kind of behavior in the community. We look at them from a lot of distance. We don’t want to be in close conversation with them or touch them or feel them. They are considered a curse on the society. That’s the last thing that one can imagine if you want to do a multi-stakeholder way of growing things: they are not even part of the stakeholdership. So then, in the next one minute: what we did is that we started talking to them and including them in doing, rather than talking about what’s your problem and what’s my problem and all that. We thought that, okay, this person is doing connectivity; you can also do connectivity. If this person is running an access point or a public access point, you can also run an access point. If this person is accessing a computer and finding a payment for somebody, you can also do that.
So everything that we thought anybody can do, if a disabled person can do it, we started working with them on it. So they became part of the working ecosystem, and suddenly they became social entrepreneurs, or entrepreneurs, or providers. So earlier their life was that of a seeker; now they are more like providers. And we created not one, but 300, for example, now 500 people who run digital centers, digital access points, at the village level, and they provide service to all the able people, actually. And then what we learned from them is that now you can talk about disability, you can talk about their miseries, you can talk about their requirements, and so on and so forth. And that actually brought us to par, now on an equal footing to discuss, right? And then we thought, can we replicate this model? So what I am now coming to is the last part: now they are able people for providing service, for talking, for infrastructure development, for digital access, everything, right? They also have more knowledge than others, because they also know extra things, about the special ability to serve disabled people. And then we did this entire research and found out that, oh my God, the government has not done the census for people with disability for so long. Why don’t we have that? Now we are asking them: you demand it, rather than we demand it for you. Because we are also going to demand the census for the whole country, right? Which is also not done. But can we have a special one, only for that? Yeah, it’s about 15, 20 years actually. Two decades. One decade is lost and another decade has not come. So how can they want? The second thing: they started telling us that all the government facilities are actually not implemented on the ground. The government says ICT enablement for access of people with disability, but the whole digital center at a public access point is not even enabled for that. It’s not wheelchair friendly. The point is that digital inclusion in our country and many other countries actually makes even the normal people disabled, not only the disabled people: they can’t type, they can’t go to the place. You have to do extra. You have to become absolutely audiovisual. And those kinds of things have started coming in between. Third is that, being people with disability, they have more network and data about their own community, right? So they can become a secondary source of information about the special needs or special abilities of the people with disability. That we must take advantage of this is one of the recommendations that we have made. And the next part is that data, in any case, for anybody, is very important, right? About the protection: why a more protection-oriented approach is required for people with disability is because of the extraordinary, deprivation-oriented way of looking at them. So therefore you have to be more protective about them. You have to make sure that their participation, their sense, is there. And the last part is: do not treat people with disability as if they have, let’s say, a mental disability. Like our researchers clearly said, the government has created legislation where you are infantilizing their ability. I mean, they are very much able; why should somebody else take a decision on their behalf? Just because they cannot walk or they cannot use their hands? No, that’s not fair.
You know, from that point of view, the whole illiterate population in our country, about 40% of the population who cannot access the internet, is disabled for accessing the internet, just because they are not digitally literate. I mean, how can you do that? So that’s very, very important. But my last point is that we must do conversation, action, intervention with people with disability in everything, in everything. Whether you are doing research or data collection or doing something on work, they must be part of the ecosystem. Only then does it become complete, rather than saying that only the conversation needs to change, only something needs to change. It’s something like we say: don’t do a manel, always do a panel, which must have a woman. Similarly, can we do that here? There should always be at least one person with disability on the panel. Can you always have your discussion with one person with disability being part of it, so that we normalize the participation of disabled people in the normal conversation? That is what I would like to say.

Fawaz Shaheen: No, I think that’s an excellent point. And it’s important also to highlight how even sometimes very well-intentioned interventions by civil society, by human rights actors, can also be very infantilizing, patronizing. I think that’s the point you’re making, and the same point that Tithi and Angelina were also making. And now, on that note, I would like to move to our next speaker, Maitreya Shah. Maitreya is currently a tech policy fellow at UC Berkeley, also an affiliate at the Berkman Klein Center for Internet and Society. He’s a disabled lawyer and researcher from India, and he has a unique insight into the challenges that persons with disabilities face with regard to digital access and digital protection. I would also encourage you to actually go and read some of his writings. You can find a lot of them on his page on the Berkman Klein Center website, as well as on the UC Berkeley website. Do read some of his writings. But Maitreya, I would like to thank you for joining us. And I’d like to start by asking you: considering the increasing digitization of essential services, what are the gaps in legislation in India regarding the rights of persons with disabilities in the context of data protection? And also, how can the policies that we adopt encourage user-centric approaches in the development of technology that persons with disabilities access? Again, a broad question, but we would like you to come in on that and take us through some of these perspectives with your insight. Maitreya, if you’re able to introduce yourself. Can you hear me?

Maitreya Shah: Yeah. Hi. Yes, we can. Hi. Thank you for having us. Thank you so much for inviting me. My pronouns are he, him. And I’m wearing a button-down with a large pair of headphones. I just started my camera; hopefully you can see me. I am an Indian with curly brown hair. Sorry, curly hair. So I think this is a great question. Yes, we can see you now. Thank you. Thank you. So I think this is a great question, and I’m wondering what sort of legislation I can talk about. Since the issues of the current legislative frameworks in India have already been covered, I think I’ll speak about digital accessibility and data protection a little broadly in the Indian context. So to start with, India has always had this issue of privacy for people with disabilities when it comes to digital technologies, or even other forms of emerging technologies. A lot of my recent work has been on how Aadhaar, the biometrics-based identity system of India, has not adequately considered the privacy and accessibility implications for people with disabilities. The iris scans and fingerprints that Aadhaar collects often exclude people with disabilities, because the algorithms and the infrastructure that they use, quote unquote, treat people with disabilities as outliers or as non-normative. So in India this issue has been long-standing. The Aadhaar project started way back in 2009, with implementation in 2012, and even as of today people with disabilities are facing issues in enrolling with the technology and also authenticating their identity, especially with public services such as cash transfers or welfare programs. So this issue is long-standing. But coming to more legislative, more legal, more policy issues in this area: as Tithi and Angelina rightly said, the data protection law of India treats people with disabilities equally with children. And I think there are several issues with this. To start with, the Rights of Persons with Disabilities Act, when it was enacted, in a way takes precedence over other legislations when it comes to disability matters, especially issues such as guardianship. My question is whether the data protection law even has the prerogative to design a consent framework where guardianship and other very complex socio-legal issues are involved. It is, I think, the disability law that has done this efficiently, and I think that is the right framework to address these issues. But the data protection law suddenly starts multiplying these legislative provisions and starts adding new complexities for people with disabilities, right? And I think Tithi and Angelina raised this very pertinent question as to whether the GDPR approach is correct or the Indian approach is correct. And I think this is again a very complicated question, because although India borrowed heavily from the GDPR for the larger legislation that we enacted, when it came to disability we did a lot of innovation when we brought in this provision. And probably, as I said, we don’t need a separate consent mechanism or a separate provision for people with disabilities at all. We might be good with the GDPR way of doing things, because I think the idea is to respect the agency of people with disabilities, not develop new consent mechanisms.
Instead, the focus should be on making technologies privacy-preserving, making technologies more accessible, seeing that technologies do not unlawfully access the disability or health information of people with disabilities. I think the focus has to shift from the user to the data fiduciaries, or the corporations that are building technologies. In India, one of the biggest problems that we have had through our legislations is that a lot of onus is placed on people with disabilities. And with privacy, there has been a lot of scholarly research and a lot of criticism of these individual privacy frameworks; individual privacy might not be the best way forward, especially with AI and emerging technologies, where user agency is usually already curtailed. So I think we need to think about making technologies more privacy-preserving rather than putting additional complexities on people with disabilities to coordinate with their guardians and then work on consent to access even the very basic necessities, because digital access and internet access is now a necessity. And I can give you an example of how this is actually playing out on the ground. Recently, I was doing a training session for assistive technology manufacturers in India who are specifically building technologies for people with disabilities. And a lot of them told me how this guardianship provision, this complex consent-based provision in the data protection law, is raising many issues for them, because it’s not giving them adequate opportunity to provide their technologies to people with disabilities. They also wrote to the government saying that this provision needs an amendment, specifically when it comes to assistive technology. I think this is one very broad issue. The other broad issue is that there has been this inherent trade-off between accessibility and privacy. So at times people with disabilities are compelled to give away their privacy to access digital technologies. To give you an example, earlier this year I wrote an article on how companies are developing automated tools, claiming that these tools can fix websites and make them accessible without changing the source code, and how these practices are inherently deceptive. What it essentially does is that, in the garb of making websites accessible, it violates user privacy, because these overlay tools, as they are called, can infer the disability status of individuals by collecting data on screen reader usage, or the use of magnification devices, and so on. And so there has been some conversation on this outside India, especially in the United States. In India, this conversation is not happening. And why this is particularly problematic when it comes to privacy is that, whereas the adoption of these technologies is now in a way restricted in the United States, companies and even the government are increasingly adopting these problematic technologies in India. And to give you an example, you can go back and check this: the IndiaAI portal managed by the Ministry of Electronics and Information Technology, quote unquote the knowledge platform for India’s AI economy, itself uses an accessibility overlay tool. So my question is: we need to think about this comprehensively. Digital accessibility is not a tick-box solution. We have to think about this comprehensively, especially when it comes to privacy, and there are additional challenges due to AI and emerging technologies that are posing issues for people with disabilities. And thirdly, I’ll very briefly touch on the issue that India’s digital accessibility laws are still very nascent. We have not adequately made our websites, whether government or private, accessible. And I think our approach is also quite singular, in a way, in that a lot of our digital accessibility frameworks in India focus on accessibility for people who are blind but do not adequately cater to people who have other forms of disabilities. So there are challenges of accessibility and privacy even for people who have, say, learning disabilities like dyslexia, or who are autistic, or who have other intellectual disabilities, and our laws are still quite nascent on that front; we’re not adequately thinking about a cross-disability approach when it comes to accessibility and privacy. So with India there are many issues at a larger policy level: people with disabilities are not adequately represented in the conversation; there is duplication and a lack of harmonization in our regulations; and I think we’re not adequately regulating the larger sector when it comes to preserving the privacy of people with disabilities and ensuring their accessibility at the same time. I’m happy to talk more about this in the question-and-answer session.
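To make the privacy risk described here concrete: below is a minimal sketch, in TypeScript, of the kind of client-side signals an accessibility overlay script could collect to infer assistive-technology use. The browser APIs shown (matchMedia, devicePixelRatio, standard DOM events) are real; the heuristics, thresholds, and names are illustrative assumptions about the deceptive pattern being criticized, not code taken from any specific product.

```typescript
// Illustrative sketch of the kind of client-side fingerprinting an accessibility
// "overlay" script could perform. Shown only to make the privacy risk concrete;
// the thresholds and field names are assumptions, not any vendor's actual code.
interface AssistiveTechSignals {
  prefersReducedMotion: boolean; // OS-level accessibility preference, exposed via a CSS media query
  highZoom: boolean;             // crude magnification heuristic
  keyboardOnly: boolean;         // Tab navigation observed without any mouse use
}

function watchSignals(): AssistiveTechSignals {
  const signals: AssistiveTechSignals = {
    prefersReducedMotion: window.matchMedia("(prefers-reduced-motion: reduce)").matches,
    highZoom: window.devicePixelRatio > 2, // rises with page zoom on most desktop browsers
    keyboardOnly: false,
  };
  let sawMouse = false;
  document.addEventListener("mousedown", () => { sawMouse = true; });
  document.addEventListener("keydown", (event) => {
    // Tab presses with no prior mouse activity suggest a keyboard-only or
    // screen-reader user; the signals object is updated as events arrive.
    if (event.key === "Tab" && !sawMouse) signals.keyboardOnly = true;
  });
  return signals;
}

// If a script shipped these signals to a third-party server, it could infer a
// visitor's likely disability status without consent, which is the practice
// criticized in the discussion above.
```

The point of the sketch is that none of these signals requires any special permission: a page script can observe them silently, which is why inferring disability status this way sits squarely in the accessibility-versus-privacy trade-off Shah describes.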

Fawaz Shaheen: Thank you, Maitreya. I think that’s a good point to pause and get our first round of questions and observations. I know some of the conversation has been very India-centric, but these are problems which are quite universal, and we are fortunate to have a lot of on-site participants from different areas, different countries. So if you have questions or any observations, any intervention from your perspective, we’d welcome that. Tithi, also please feel free to tell us any questions from the chat from our online participants. But yeah, anyone has a question here, or an observation? Yes.

Audience: Thank you for this opportunity. My question is to our in-person speaker, since he tried to be practical. I am coordinating the implementation of the African Union Data Policy Framework. And as part of that, we are running several stakeholder sessions as part of supporting some African countries to develop their national data policies. Our project wants to be participatory and inclusive, but there is a challenge when engaging persons with disability. So I want to know from your work: what is the best approach when it comes to engaging persons with disability? Are there any recommendations that we can also try? Thank you.

Osama Manzar: OK, very practical question. And thank you, because I have a good answer. So I’ll just let you know one typical, I’m sure, African country. Which country did you say that you are from? I am from Ghana, but my work covers several countries. OK, so I’ve been to Accra, so I can picture some of the villages in Ghana and many other African countries. So listen, imagine a village where, generally, people are not connected. People do not know computers, and they still need to educate themselves on computers. There are still people there who want their work to be done, or to withdraw their money, or anything; hundreds of services that you avail of are now digitally done, right? That’s the scenario everywhere you can see. Now, when you go there, you try to see how we can provide all these services to the people, right? It’s very simple: either you go to a school and establish a lab, or you create a public access point where people can come and withdraw money or file an application or whatever. What we did is that, to do that, we found out: is there any disabled person there, preferably a woman, who is ready to learn? Not educated; that’s not necessary. That’s not the basic qualification. Are you ready to take a chance? Are you ready to sit at a computer? Are you ready to handle computers? And all these services that we are talking about, you provide through this center, this facility. So suddenly, your job was actually just to find a person with disability with an intention to serve people, right? And that itself was such an empowering thing, because now the disabled person who can’t even move is sitting and giving you education on computers, switching computers on and off, and letting you know that this is the way you can work, this is the way you can fill a form, this is the way you can access YouTube, this is the way you can do something, right? So digital became ammunition for that person to provide a service. Earlier, he or she was sitting at home and waiting for somebody to do them some favor, right? And you do this from your home, so you don’t have to go to a shop. So everybody is coming there to get those services. For those same services, people used to go to faraway places, two kilometers, five kilometers or ten kilometers away, which is very expensive. And also, if you are very highly educated, then you are also very arrogant, you know, and this person is very humble. And then most of the people also start coming there, because they think, oh my God, this person is doing so many things, why don’t we go there and get it done by this person? And that one example becomes a viral thing for the rest of the people; the imagination changes. And we did this not in one location; now, out of 2,000 locations, 500 locations are run by the disabled people. So the whole narrative changed, the area changed. And then, when you ask those questions, which I’m not going to share now, maybe if I get a chance I will share with you what the statistics and numbers say, because 85% of the people are with mobility disability. So you don’t have to think about another 15% while you can do 85% of the things. Why don’t you just first do the low-hanging fruit? Just the mobility people; just reach them. They are the visual representatives of the people with disability.
So those are the things that we did, and we can share with you the whole research that we have done with all these 500 people, and how it can actually become a replicable initiative in many other countries.

Fawaz Shaheen: Thank you. We would also encourage more participants to come up with their questions and interventions. Before we move to the next round of discussion, any questions from the online chat? Otherwise we can move to the next round of discussion and come back. There are no questions in the online chat, but there’s one comment from Eleni, commenting on Osama Manzar’s point on involving persons with disabilities in the process. We could ask Eleni to come in and… Is it better now? No, there’s still an echo. Can you ask Eleni to come in? Hello? Yeah, I think we are better now. Sorry for that. Sorry for that. A bit of a disruption. But Eleni, could we ask you to come in? And I think there was a comment that you had. Oh, I’m on mute. Thank you. Yes. So, I was saying that it is very interesting. The point is, I think, in the meanwhile, we can continue the conversation here, especially because you already asked the question about the work that you’re doing at the digital center. The next round of discussion that we’re having is around automated decision-making. But before we go there, there is a question that we wanted to ask Maitreya also, and start with you, Mr. Manzar, maybe. Because, is it? Yeah, I think there’s some problem. Should we wait a couple of minutes or can we continue? Okay. Yes. What I wanted to ask you was if we could understand a little more about setting up a local digital inclusion initiative, because it’s one aspect of what you’ve been talking about: empowering people, making them providers and enablers instead of being recipients. But there is also the question that the data around disability is extremely skewed. We don’t have data that adequately represents how persons with disabilities are also a diverse group. So the picture of who is a disabled person is a very settled, very stereotypical picture, if I may say. But through your experience, through your work, including the interventions in the field as well as the research work, you of course have a much better view. Say at a government level, or when we are trying to design interventions at a larger level, how do you think this diversity among persons with disabilities can be approached? How can that be accounted for?

Speaker 1: So I would say that it is actually innovation in, I would say, the data collection and also the data contextualization. For example, let's say we have a dataset which shows the diversity of various kinds of disability. Now the tick boxes ask what the disabilities are, but there is no tick box on what the abilities are, right? Because you have to ride on the abilities of the same person who is having the disability. What I'm giving as an example is: let's say my hands are not working. That's one disability. But should we only work on making that hand work, or should we work on the fact that your mouth is working, your voice is working, your eyes are working, your mind is working, the other parts of your body are working, and how those working parts and enabling devices can actually make you not even think about the hand? It's something like the remote controls we used to use, which were handled by hand, but now all the remote controls are voice-enabled, right? Voice-controlled. So I am controlling all the remote controls by voice; I don't need my hand for managing them. I'm just giving one example. So rather than thinking, you know, I will create something non-handheld, you think about what things can be voice-enabled, eyes-enabled, visually enabled. That is where the real innovation and contextualization is very important. I would like to say that, while I'm not denying it, and our census or data collection on this diverse population must be done, beside that, we should put equal effort into what ability-oriented things are already available which we can contextualize. For example, can we have a list of everything that a mobile phone does which is directly relevant to people with disability? Because the mobile device is the most able device among all the other devices in terms of making you able to work: it can transliterate, and you don't have to see, because it will speak out your messages. There are many other things that the mobile gives you, and in a very secure manner, by keeping your data also very secure. The mobile does this, but I don't know it, and many of the people with disability do not know it. Especially if I don't know, and I'm supposed to be an enabler in some community, then it's a pity that I don't know. Can there be a whole checklist of everything, that these are the things that we must apply? All the enabling things that, let's say, a digital access public center at a village level should have: it should have a ramp, but it should have a hundred other things, which is not done. For example, none of our websites are voice-enabled; none of the websites are hearing-enabled or visually enabled. That is the first thing that we should do. To do that, you just have to make it mobile-enabled. If you do mobile-enabled, then automatically your content can also be read out and everything, rather than doing it on a typical or traditional web version. So what I'm saying, in the last sentence, is that we must do the data collection and data solution with very high-level contextualization, cross-pollinating the facilitation of all the ability-oriented things which can enable, rather than only focusing on disability.

Maitreya Shah: Thank you so much. I think I’d like to add something. I think I’d like to differ with my friend a little bit here because I think, you know, the problem with the entire data collection effort is not the focus on disability or ability. In fact, I feel that the very distinction between ability and disability, I think, is a very medicalized conception of disability, you know, because we only focus on particular organ senses or impairments. And I think we have come a long way after a lot of battle and advocacy to kind of do away with that in the society and especially with governments who have increasingly been relying on, you know, impairment-based metrics to collect data about people or to categorize people with disabilities. And I think in India, this is specifically evident, you know, the 2016 Act only recognizes 21 categories of disabilities, which is a very narrow way of defining disability. You know, if you see the UN Convention on Rights of Persons with Disabilities, it talks about a very broader approach. It talks about, you know, social barriers. It talks about attitudinal barriers. And disability is defined from a more of a social model perspective where you think about how society is, you know, posing barriers for people with disabilities versus, you know, what the ability or the disability of a particular individual is. And so I think that is why, you know, this data collection efforts have been failing because, you know, is that, you know, we, the official data says that people with disabilities are, you know, 2 to 5 percent of the Indian population. But according to the World Health Organization, the 15 percent of the world population lives with some of the other form of disability. If you ask me, I think this number is only going to be more in the Indian context because we also have high rates, high numbers of people who are poor, people who face other forms of marginalization, including lack of access to healthcare. So the number of people with disabilities is going to be more. But our data collection efforts have been failing because we don’t want to count those people. We don’t want to count all those who might be eligible to identify as a person with disability. So I think this disability distinction, I feel is very medicalized and I think we need to probably take the UNCRPD approach, take a more social model approach, and think about larger barriers, larger universal access issues when we think about disability and data collection. Thank you.

Fawaz Shaheen: All right. Thank you, Maitreya. I think that's well taken. I could see you nodding through most of it. We can continue this engagement also if you want to.

Speaker 1: We can have this. I don't have any disagreement with what you're saying, because when you narrate anything, there can always be a different articulation of different things. The people who deal with data deal with laws and orders; I don't even know how to do that. But I totally agree that, yes, there is a perception of looking at disability from a very medical perspective. And that is the reason why I mentioned that most of our activities are related to not looking at it that way, but looking at it with several other abilities, and taking a positive stance on that to make it work better.

Fawaz Shaheen: Sure. I think this is an interesting conversation. We are down to the last 20 minutes, so we'll have one more question each for Maitreya and for Eleni, and then we'll have some time for taking questions and observations from our participants. Before moving forward, I would just like to remind all the participants who just joined us that we have a document with a suggested code of best practices for an inclusive internet. This is a collaborative document. We request that before leaving you get access to the document from my colleague Nidhi, who's standing here; we will continue to take suggestions till the end of the day, and we'll try to get some of these suggestions included in the IGF call to action. So now, without further ado, Maitreya, I would like to take the next question to you, especially because you've been working on this. Fairness in AI is a conversation that needs to include persons with disabilities much more, and in your last answer you were actually talking about how most of the automated accessibility plugins that we see are inherently deceptive. Some of that, we know, is also because of, as we said, the corruption of the dataset itself, the fact that it doesn't account for the diversity that is there among persons with disabilities. But I guess my question to you right now is: given this context, how do you think we can begin to ask for accountability from automated decision-making systems, especially from the perspective of persons with disabilities?

Maitreya Shah: Thank you so much. I think that’s a great question and I think I’d like to break down my answer into sort of two you know I think I’ll break it based on the two kind of broader technologies that I see in the market. One is quote-unquote technologies that are specifically designed for people with disabilities and the other is mainstream technologies that people with disabilities also use. as users. So the first category where we think about automated and AI based technologies such as assistive technologies particularly, that are designed particularly for people with disabilities. I think when we when we think about accountability here, I think my my first argument is, you know, there is a lot of pseudoscience and a lot of these broader, you know, unchecked optimism with with AI powered assistive technologies. You know, I think everyone these days I feel is coming out with an AI powered assistive technologies in the market without understanding what their implications would be on people with disabilities. To give you a very small example, especially in the care space, you know, for people with disabilities who require caregiving support people such as you know, muscular dystrophy or other cerebral palsy or other disabilities who might require based on the severity of disability might require caregiving support. A lot of AI powered smart robots are being adopted across the world saying that these technologies would be transformative for people with disabilities. Not realizing, you know, that these technologies might violate privacy of people with disabilities. You know, whether do you know, you do need these technologies at all? Like, why would you need to replace humans with a robot when it comes to disability caregiving? There are many other issues, you know, there are assistive technologies that claim to fix certain disabilities like autism. And I think that is something that is very deeply problematic, because, you know, disability is not some, you know, not not not an ailment or disease that needs to be fixed or an abnormality that needs to be fixed. And there are many AI powered technologies currently in the market that claim to quote unquote, cure So I think for me, the faintest conversation starts here to ask if we need a certain technology for people with disabilities at all. And if we need them, you know, who decides what is good for people with disabilities? Should it be a certain corporation sitting somewhere in the US run by non-disabled people thinking if a technology would be good or bad for a person with disability? Or should it be a person with disability or a group of people with disabilities thinking about technologies themselves? The second broader point is around mainstream technologies. And I think there is so much AI and so much automation around us. And I think that’s where people with disabilities are very marginalized in the conversation today. A lot of research that has happened has focused on racial minorities or for gender. But there is very little research that has focused on how AI technologies impact people with disabilities or how fairness metrics can cater to disability. A lot of my recent research at Harvard has been on how AI fairness metrics and AI governance policies both explicitly exclude disability from their ambit. So, to give you an example, a lot of LLMs these days like chatGPT are trained to not discriminate against people of color or people belonging to racial minorities. 
But they’re not trained on, you know, avoiding discrimination against people with disabilities. And in my work, I’ve illustrated how discrimination actually manifests with chatGPT and other generative AI tools like Gemini for people with disabilities. And I think there are several issues with this. There are issues with… with the data because you know this is a long-standing issue. We don’t have enough data on people with disabilities. The data that we have is often very biased because there has been so much societal stigma against people with disabilities and I think almost every jurisdiction or every country. So there is this problem of data. Then the other problem is that you know people with disabilities are usually never considered, they are never thought about when a technology is designed or deployed. So you know people with disabilities are usually never on the table. The workforce participation of people with disabilities especially the technology sector is very less. So the representation is an issue. And the third is when you think about governance policies, they also do not adequately consider disabilities. So when you think about how a technology should be free of biases or when they pose risks and so on, disability is not considered. And that’s something that you can see in other jurisdictions. So the European AI Act for example does not consider how a biometric technology might impact people with disabilities and they put biometric technologies in the lower threat. Whereas you know to me and to a lot of other researchers, biometric technologies pose significant risk for people with disabilities. So I think you know there are many issues with the larger AI fairness conversations, the larger AI bias conversations when it comes to disability. And I think to kind of end with, I think I like to put this in like the broad themes of one, lack of representation of people with disabilities in these conversations. Two, a lot of false optimism or a lot of hype around AI-powered assistive technology without understanding if they are even effective. of what they are seeking to do, what is the intent of the companies that are manufacturing such technologies. And I think the third is this larger marginalization of disability from different technological stages or the different stages of a technology life cycle, right from design, then to deployment, and then to governance. So I think this is broadly my understanding of the larger landscape of automated systems right now.

Fawaz Shaheen: Thank you, Maitreya. That was very detailed, and I know you were trying to contain a lot into very little time, so thank you so much for doing that. We have 10 minutes now, and very quickly, Eleni, if we could go to you for three or four minutes. UNESCO has done so much work on open and distance learning. From your own experience, could you talk a little bit about the challenges of automated decision-making, especially in a field like education, and some guidelines, ethical safeguards, or principles that we should keep in mind when approaching AI systems that impact persons with disabilities? And of course, when I say AI systems, I mean AI systems that are absolutely necessary, as Maitreya would say, not just any systems because we want to build them, but AI systems that are necessary or that are going to impact persons with disabilities. Eleni, if you could just respond to that, three or four minutes, yeah.

Eleni Boursinou: Thank you, thank you, Fawaz. So the work UNESCO has been doing in education is about, I mean, we see that there are really a lot of challenges faced by learners with disabilities in online education. So the accessibility gaps in content and in platforms have to be recognized. There are tools that remain inaccessible, often incompatible with assistive technologies, that leave learners with disabilities excluded. These barriers are also accentuated by digital skill gaps and limited access to resources. We have a lack of inclusive pedagogies and of capacity building and training for educators in universal design for learning, which further exacerbates these issues. Automated decision-making systems in education pose even more risks for persons with disabilities, because biases in data and algorithms often result in discriminatory outcomes, such as biased admissions decisions, assessments, or resource allocation. Transparency and accountability remain significant concerns, and ADM systems often make unjust decisions and fail to consider the diverse needs of persons with disabilities. What we are trying to do, by addressing all these challenges, is to work with member states on guidelines and implementation policies for ethical AI development in education, and also on guidelines whose policies cover not only persons with disabilities. As Mr. Shah said in his comment, the definition now has to include all marginalized communities and vulnerable groups, including people with autism, dyslexia, and similar learning difficulties. So, what we think is… I'm going to share in the chat all the links to some documents that UNESCO has been working on. And I also want to end with a field example that we had with the Rwanda Education Board. We worked with the Rwanda Education Board and the Light for the World NGO to foster digital skills development with teachers that have disabilities. And we really saw that it is very, very important to have teachers with disabilities represented within the communities and the whole education ecosystem. We had a very, very interesting case study based on that, and they provided constructive feedback on the guidelines, and we actually enriched the guidelines for the governments based on their feedback. So, that's all from me. I will share all the links in the chat.

Fawaz Shaheen: Thank you, Eleni. And we have some observations from participants. I think we can… yeah, we'll just get the mic to you, sir. Thank you.

Audience: To introduce myself, I am Dr. Mohammad Shabbir from Pakistan, and I am the coordinator of the Internet Governance Forum's Dynamic Coalition on Accessibility and Disability. I am really impressed by the discussion, and I am sorry that I joined a little bit late, due to another session that I was speaking at. So, as the coordinator of the Dynamic Coalition on Accessibility and Disability, I would just want to flag that we have accessibility guidelines. The Dynamic Coalition on Accessibility and Disability is one of the oldest setups within the IGF system, and it has been advising the IGF on organizing accessible meetings for people with disabilities. We have revised guidelines out there at the DC booths, which in-person participants can go and get from there, and if someone needs a braille copy, that is also available there. Secondly, I would agree with the speaker from Harvard. I think it was Maitreya; pardon me if I'm pronouncing the name wrong. There are deceptive practices within the data systems, particularly when we talk about AI and algorithm-based systems. Many of the things have already been said, but one concern that I have as a person with disability: we all use a number of AI-based systems, and we all know that data, as the oil of AI, is being used to train these systems. As persons with disabilities, we sometimes use these applications and systems for many of our personal documents, personal tasks, some work, et cetera. So a lot of private data of persons with disabilities is fed into these applications and systems. The same is the case for people who are hard of hearing, who use these technologies for interpretation and translation, be it sign language interpretation or otherwise, in their conversations. So a lot of data is going into these applications. We don't know, because there are privacy policies, but to my understanding, except for Be My Eyes or some other applications, many of the applications have not changed their privacy policies. So these also need to be looked at from a disability-friendly perspective. One last point, and it does not need a response, but I want to leave it with you as an afterthought. There is a disability definition in the CRPD which states that people have impairments, and disability occurs when those impairments interact with societal barriers. If those barriers are removed, disabilities are removed. Broadening this definition includes people with learning disabilities, autism, and so on, but my friends from India would understand that there are a lot of attempts to abuse these definitions of disability, with people posing as persons with disabilities to get advantages meant for persons with disabilities. So on the one hand, while I agree that we need to encompass all disabilities in the definitions, we also need certain policies so that impersonators can be kept out of the facilities where people with disabilities are supported, rather than coming in and getting advantages in the name of persons with disabilities. Thank you so much.

Fawaz Shaheen: Thank you so much, Dr. Shabbir. I know that there is more scope for conversation right now, but unfortunately we are very much out of time. Thank you so much for joining us, even if it was for a little bit. And we are still here: if anyone wants to chat, have a conversation, the document is still up, you can comment on it and leave your suggestions. I would like to particularly thank our on-site speaker, Osama Manzar, for joining us, for sharing your learnings and your experiences. I'd like to thank Maitreya for joining us from such a different time zone and bringing your insights into this; I'm sure we have a lot to learn from you. And I encourage everyone, actually, to check out Maitreya Shah online. Check out his page on the UC Berkeley website; there are some very interesting articles. I was recently reading one on how the AI conversation needs to include voices with disabilities. Do go and check those out. Also, thank you so much, Eleni, for joining us and for sharing your very valuable insights. And thank you, everyone, for taking out the time. We'll keep this conversation going. See you around. Thank you. Thanks, everyone. Bye.


Tithi Neogi


Need for accessible consent mechanisms and digital services

Explanation: Digital accessibility is crucial for persons with disabilities to give meaningful consent online. Current data protection laws do not mandate accessible consent mechanisms for persons with disabilities.

Evidence: The Indian data protection law allows guardians to give consent on behalf of persons with disabilities, reducing incentives to make consent mechanisms accessible.

Major Discussion Point: Digital Accessibility and Inclusion for Persons with Disabilities

Agreed with: Osama Manzar, Eleni Boursinou, Maitreya Shah
Agreed on: Need for digital accessibility and inclusion for persons with disabilities


Osama Manzar


Importance of involving persons with disabilities in technology development and service provision

Explanation: Persons with disabilities should be involved in providing digital services rather than being treated as subjects. This empowers them and changes the narrative around disability.

Evidence: Example of setting up digital centers run by persons with disabilities in villages, providing services to the community.

Major Discussion Point: Digital Accessibility and Inclusion for Persons with Disabilities

Agreed with: Tithi Neogi, Eleni Boursinou, Maitreya Shah
Agreed on: Need for digital accessibility and inclusion for persons with disabilities

Differed with: Maitreya Shah
Differed on: Approach to disability data collection


Eleni Boursinou


Challenges in online education accessibility for learners with disabilities

Explanation: Learners with disabilities face significant barriers in online education due to inaccessible content and platforms. There is a lack of inclusive pedagogies and capacity building for educators in universal design for learning.

Evidence: UNESCO's work with the Rwanda Education Board and Light for the World NGO to foster digital skills development for teachers with disabilities.

Major Discussion Point: Digital Accessibility and Inclusion for Persons with Disabilities

Agreed with: Tithi Neogi, Osama Manzar, Maitreya Shah
Agreed on: Need for digital accessibility and inclusion for persons with disabilities


Maitreya Shah


Lack of representation of persons with disabilities in AI and technology conversations

Explanation: People with disabilities are often not considered when technologies are designed or deployed. There is low workforce participation of people with disabilities in the technology sector.

Evidence: Example of how AI fairness metrics and governance policies often explicitly exclude disability from their ambit.

Major Discussion Point: Digital Accessibility and Inclusion for Persons with Disabilities

Agreed with: Tithi Neogi, Osama Manzar, Eleni Boursinou
Agreed on: Need for digital accessibility and inclusion for persons with disabilities

Privacy risks from AI-powered assistive technologies and data collection

Explanation: There is unchecked optimism about AI-powered assistive technologies without understanding their implications for people with disabilities. These technologies may violate privacy and promote problematic ideas about 'fixing' disabilities.

Evidence: Example of AI-powered smart robots being adopted for caregiving without considering privacy implications.

Major Discussion Point: Data Protection and Privacy for Persons with Disabilities

Agreed with: Angelina Dash, Audience
Agreed on: Concerns about data protection and privacy for persons with disabilities

Risks of bias and discrimination against persons with disabilities in AI systems

Explanation: AI systems and large language models are often not trained to avoid discrimination against people with disabilities. This leads to biased outcomes in various applications of AI.

Evidence: Research showing how discrimination manifests in ChatGPT and other generative AI tools for people with disabilities.

Major Discussion Point: AI and Automated Decision-Making Systems

Importance of disability representation in AI fairness and governance conversations

Explanation: People with disabilities are marginalized in conversations about AI fairness and governance. Disability is often not considered in policies about technology risks and biases.

Evidence: Example of the European AI Act not considering how biometric technologies might impact people with disabilities.

Major Discussion Point: AI and Automated Decision-Making Systems

Limitations of medical/impairment-based approaches to disability data collection

Explanation: Current data collection efforts often fail because they rely on a narrow, medicalized conception of disability. This approach excludes many people who might identify as disabled under a broader definition.

Evidence: Contrast between official Indian data showing 2-5% of the population as disabled, versus WHO estimates of 15% globally.

Major Discussion Point: Disability Data and Definitions

Differed with: Speaker 1
Differed on: Approach to disability data collection

Need for social model and broader definitions of disability aligned with UNCRPD

Explanation: A social model approach to disability, focusing on societal barriers rather than individual impairments, is needed. This aligns with the UN Convention on Rights of Persons with Disabilities and provides a more inclusive framework.

Major Discussion Point: Disability Data and Definitions


Angelina Dash


Issues with treating persons with disabilities like children in data protection laws

Explanation: The Indian data protection law treats persons with disabilities similarly to children, requiring consent from guardians. This approach infantilizes persons with disabilities and doesn't account for their autonomy.

Evidence: Comparison of treatment of children and persons with disabilities in Indian data protection law.

Major Discussion Point: Data Protection and Privacy for Persons with Disabilities

Agreed with: Maitreya Shah, Audience
Agreed on: Concerns about data protection and privacy for persons with disabilities

Need for sensitive personal data category in data protection laws

Explanation: The absence of a sensitive personal data category in Indian data protection law is problematic. Certain data of persons with disabilities can be more vulnerable and susceptible to misuse for discrimination.

Evidence: Examples of health and financial data of persons with disabilities being more vulnerable to misuse.

Major Discussion Point: Data Protection and Privacy for Persons with Disabilities

Agreed with: Maitreya Shah, Audience
Agreed on: Concerns about data protection and privacy for persons with disabilities


Audience


Concerns about private data of persons with disabilities being used to train AI systems

Explanation: Persons with disabilities often use AI-based systems for personal tasks, inputting private data. There are concerns about how this data is being used to train AI systems and whether privacy policies adequately protect this sensitive information.

Evidence: Examples of applications used by people who are hard of hearing for interpretation and translation.

Major Discussion Point: Data Protection and Privacy for Persons with Disabilities

Agreed with: Angelina Dash, Maitreya Shah
Agreed on: Concerns about data protection and privacy for persons with disabilities

Challenges in balancing inclusive definitions with preventing misuse/impersonation

Explanation: While broader definitions of disability are needed, there are concerns about potential abuse and impersonation. Policies are needed to prevent impersonators from taking advantage of facilities meant for persons with disabilities.

Major Discussion Point: Disability Data and Definitions

Agreements

Agreement Points

Need for digital accessibility and inclusion for persons with disabilities

Speakers: Tithi Neogi, Osama Manzar, Eleni Boursinou, Maitreya Shah

Arguments: Need for accessible consent mechanisms and digital services; Importance of involving persons with disabilities in technology development and service provision; Challenges in online education accessibility for learners with disabilities; Lack of representation of persons with disabilities in AI and technology conversations

All speakers emphasized the importance of making digital technologies and services accessible and inclusive for persons with disabilities, highlighting various aspects such as consent mechanisms, education, and technology development.

Concerns about data protection and privacy for persons with disabilities

Speakers: Angelina Dash, Maitreya Shah, Audience

Arguments: Issues with treating persons with disabilities like children in data protection laws; Need for sensitive personal data category in data protection laws; Privacy risks from AI-powered assistive technologies and data collection; Concerns about private data of persons with disabilities being used to train AI systems

Multiple speakers raised concerns about the inadequate protection of personal data of persons with disabilities, highlighting issues in current laws and risks associated with AI technologies.

Similar Viewpoints

Speakers: Maitreya Shah, Audience

Arguments: Limitations of medical/impairment-based approaches to disability data collection; Need for social model and broader definitions of disability aligned with UNCRPD; Challenges in balancing inclusive definitions with preventing misuse/impersonation

Both speakers advocate for broader, more inclusive definitions of disability that go beyond medical models, while also acknowledging the need to prevent misuse of such definitions.

Unexpected Consensus

Importance of involving persons with disabilities in technology development and service provision

Speakers: Osama Manzar, Maitreya Shah

Arguments: Importance of involving persons with disabilities in technology development and service provision; Lack of representation of persons with disabilities in AI and technology conversations

Despite coming from different backgrounds (grassroots implementation vs. academic research), both speakers strongly emphasized the need for direct involvement of persons with disabilities in technology development and deployment.

Overall Assessment

Summary: The speakers generally agreed on the need for greater digital accessibility, inclusion, and data protection for persons with disabilities. They also emphasized the importance of involving persons with disabilities in technology development and policy-making processes.

Consensus level: There was a high level of consensus on the main issues, with speakers complementing each other's perspectives from different angles (policy, grassroots implementation, research). This consensus suggests a strong foundation for developing more inclusive policies and practices in digital accessibility and data protection for persons with disabilities.

Differences

Different Viewpoints

Approach to disability data collection

Speakers: Osama Manzar, Maitreya Shah

Arguments: Importance of involving persons with disabilities in technology development and service provision; Limitations of medical/impairment-based approaches to disability data collection

Osama Manzar emphasizes focusing on abilities and involving persons with disabilities in service provision, while Maitreya Shah argues for moving away from medicalized conceptions of disability towards a social model approach in data collection.

Unexpected Differences

Optimism about AI-powered assistive technologies

Speakers: Osama Manzar, Maitreya Shah

Arguments: Importance of involving persons with disabilities in technology development and service provision; Privacy risks from AI-powered assistive technologies and data collection

While Osama Manzar appears optimistic about the potential of digital technologies to empower persons with disabilities, Maitreya Shah unexpectedly raises concerns about unchecked optimism regarding AI-powered assistive technologies, highlighting potential privacy risks and problematic assumptions about 'fixing' disabilities.

Overall Assessment

Summary: The main areas of disagreement revolve around approaches to disability data collection, the role of AI-powered assistive technologies, and the extent of representation needed for persons with disabilities in technology development and policy discussions.

Difference level: The level of disagreement is moderate. While speakers generally agree on the importance of inclusion and representation for persons with disabilities, they differ in their specific approaches and areas of concern. These differences highlight the complexity of addressing disability issues in the context of digital technologies and data protection, suggesting the need for continued dialogue and diverse perspectives in policy-making.

Partial Agreements

Speakers: Osama Manzar, Maitreya Shah

Arguments: Importance of involving persons with disabilities in technology development and service provision; Lack of representation of persons with disabilities in AI and technology conversations

Both speakers agree on the importance of involving persons with disabilities in technology development and conversations. However, they differ in their approach, with Osama Manzar focusing on practical involvement in service provision, while Maitreya Shah emphasizes representation in broader AI and technology policy discussions.



Thought Provoking Comments

"We must do conversation, action, intervention with people with disability in everything, in everything, whether you are doing research or data collection or doing something on work, they must be part of the ecosystem."

Speaker: Osama Manzar

Reason: This comment emphasizes the critical importance of including people with disabilities in all aspects of research, policy-making, and implementation related to disability issues. It challenges the common practice of making decisions for people with disabilities without their input.

Impact: This comment shifted the discussion towards the importance of representation and inclusion of people with disabilities in decision-making processes. It led to further exploration of how to meaningfully involve people with disabilities in various contexts.

"The problem with the entire data collection effort is not the focus on disability or ability. In fact, I feel that the very distinction between ability and disability, I think, is a very medicalized conception of disability."

Speaker: Maitreya Shah

Reason: This comment challenges the traditional medical model of disability and introduces the social model perspective. It highlights how current data collection methods may be fundamentally flawed due to their underlying assumptions about disability.

Impact: This comment deepened the discussion by introducing a more nuanced understanding of disability. It led to a conversation about the limitations of current data collection methods and the need for a more holistic approach to understanding disability.

"A lot of my recent research at Harvard has been on how AI fairness metrics and AI governance policies both explicitly exclude disability from their ambit."

Speaker: Maitreya Shah

Reason: This comment highlights a significant gap in current AI ethics and governance frameworks, pointing out how disability is often overlooked in discussions of AI fairness.

Impact: This comment shifted the discussion towards the intersection of disability and AI, leading to a more in-depth exploration of the challenges and risks posed by AI systems for people with disabilities.

"We worked with the Rwanda Education Board and the Light for the World NGO to foster digital skills development with teachers that have disabilities. And we really saw that it is very, very important to have teachers with disabilities be represented within the communities and the whole education ecosystem."

Speaker: Eleni Boursinou

Reason: This comment provides a concrete example of how including people with disabilities in educational initiatives can lead to more effective and inclusive outcomes.

Impact: This comment grounded the discussion in practical examples, demonstrating the real-world impact of inclusive practices. It led to a discussion of best practices and successful case studies in disability inclusion.

Overall Assessment

These key comments shaped the discussion by shifting it from a theoretical understanding of disability issues to a more nuanced, practical, and inclusive approach. They challenged traditional perspectives on disability, highlighted the importance of representation and inclusion, and brought attention to emerging challenges in areas like AI and data governance. The discussion evolved from focusing on barriers and problems to exploring solutions and best practices for meaningful inclusion of people with disabilities in various contexts.

Follow-up Questions

How can we make consent mechanisms more accessible for persons with disabilities?

Speaker: Tithi Neogi

Explanation: This is important to ensure persons with disabilities can give meaningful consent online and access digital services independently.

Should sensitive personal data be reintroduced as a category in India's data protection law?

Speaker: Angelina Dash

Explanation: This is crucial for providing additional safeguards for vulnerable data of persons with disabilities, which could be misused for discrimination.

How can we address the lack of a global south perspective in disability and internet governance discourse?

Speaker: Angelina Dash

Explanation: This is important to account for unique challenges faced by persons with disabilities in global south countries, including intersecting marginalization.

What is the best approach for engaging persons with disabilities in policy development processes?

Speaker: Audience member from Ghana

Explanation: This is crucial for ensuring policies are inclusive and address the actual needs of persons with disabilities.

How can we improve data collection efforts to better represent the diversity among persons with disabilities?

Speaker: Fawaz Shaheen

Explanation: This is important for developing more accurate and inclusive policies and interventions for persons with disabilities.

How can we ensure accountability from automated decision-making systems, particularly from the perspective of persons with disabilities?

Speaker: Fawaz Shaheen

Explanation: This is crucial to prevent discrimination and ensure fairness in AI systems that impact persons with disabilities.

How can we address the privacy concerns related to AI-based assistive technologies collecting personal data from persons with disabilities?

Speaker: Dr. Mohammad Shabbir

Explanation: This is important to protect the privacy and data rights of persons with disabilities who rely on these technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #235 Judges on Human Rights Online

WS #235 Judges on Human Rights Online

Session at a Glance

Summary

This discussion focused on the challenges and opportunities of integrating digital technologies and artificial intelligence into judicial systems. The panel, which included judges, lawyers, and technology experts, emphasized the importance of engaging the judiciary in internet governance forums to address emerging digital rights issues.


Key points included the need for judges to adapt to rapidly evolving technologies, with examples given of how AI is already being used in courts for tasks like scheduling, case assignment, and legal research. Panelists stressed the importance of developing comprehensive legal frameworks to safeguard digital rights, privacy, and transparency in the digital age. They also highlighted challenges such as cross-border jurisdiction issues in cybercrime cases and the need for better training and resources for judges in developing countries.


The discussion touched on the potential benefits of AI in improving judicial efficiency, while also cautioning about risks like AI hallucinations and the need for human oversight. Panelists agreed on the importance of including marginalized groups in digital justice initiatives and called for more inclusive policies.


The session concluded with calls for greater collaboration between the judiciary, technology experts, and policymakers. Participants emphasized the need for ongoing education and capacity building for judges and lawyers to keep pace with technological advancements. Overall, the discussion underscored the critical role of the judiciary in shaping the future of digital rights and internet governance.


Keypoints

Major discussion points:


– The importance of including judges and the judiciary in Internet governance discussions and forums


– Challenges judges face in handling digital evidence and cybercrime cases


– The use of AI and technology in judicial processes and decision-making


– The need for legal frameworks and capacity building to address digital rights issues


– Ensuring access to digital justice for marginalized groups


Overall purpose/goal:


The main goal of this discussion was to explore ways to engage and empower judges to address digital rights issues and adapt legal systems to the challenges of the digital age. The session aimed to highlight the importance of judicial involvement in Internet governance.


Tone:


The overall tone was informative and collaborative. Speakers shared insights from their experiences in different countries and contexts. There was a sense of enthusiasm about bringing judges into Internet governance discussions for the first time. The tone became more urgent when discussing challenges, but remained optimistic about finding solutions through cooperation and capacity building.


Speakers

– Nazarius Kirama: Moderator, Tanzania Internet Governance Forum


– Umar Khan Utmanzai: Advocate, practicing law at Peshawar High Court, Pakistan


– Martin Koyabe: Cybersecurity expert, Global Forum on Cyber Expertise


– Eliamani Isaya Laltaika: Judge, High Court of Tanzania


– Rachel Magege: Lawyer specializing in data protection and governance, Tanzania


Additional speakers:


– AUDIENCE: Various audience members who asked questions


Full session report

Summary of Judicial Engagement in Internet Governance Discussion


This comprehensive discussion focused on the challenges and opportunities of integrating digital technologies and artificial intelligence (AI) into judicial systems. The panel, comprising judges, lawyers, and technology experts, emphasised the critical importance of engaging the judiciary in internet governance forums to address emerging digital rights issues.


Key Themes and Discussion Points:


1. Importance of Judiciary Engagement in Internet Governance


The panellists unanimously agreed on the necessity of including judges and the judiciary in Internet governance discussions and forums. Judge Eliamani Isaya Laltaika stressed that judges need to understand digital issues to properly adjudicate cases, while moderator Nazarius Kirama pointed out that the judiciary has been notably absent from Internet Governance Forum discussions. Advocate Umar Khan Utmanzai highlighted the need for legal frameworks to be updated to address digital rights effectively.


There was a strong consensus that judges should embrace AI and other technologies to improve court processes. However, this point also revealed some differences in approach. While Judge Laltaika advocated for enthusiastically embracing AI, Utmanzai cautioned about the challenges judges face due to lack of training and understanding of technology.


2. Challenges in Applying Law to Digital Spaces


The discussion highlighted several key challenges that judges and legal systems face in the digital age:


a) Complexity of Digital Evidence: Martin Koyabe, a cybersecurity expert, emphasised that digital evidence is complex and requires new skills from judges to interpret and use effectively in court proceedings.


b) Cross-border Jurisdiction: Utmanzai pointed out that the cross-border nature of the internet creates significant jurisdictional issues for courts, particularly in cybercrime cases. He elaborated on the difficulties judges face in determining jurisdiction, collecting evidence, and enforcing judgments across borders.


c) AI Hallucinations: Judge Laltaika raised concerns about AI systems potentially “hallucinating” and presenting inaccurate information, which could have serious implications in legal proceedings.


d) Lack of Precedent: Utmanzai noted that the lack of precedent in cyber cases creates difficulties for judges in making consistent and informed decisions.


3. Strategies for Improving Digital Rights Protection


The panel and audience members proposed several strategies to address these challenges:


a) Comprehensive Legal Frameworks: There was a call for the development of comprehensive legal frameworks specifically designed to address digital rights issues. Martin Koyabe emphasised the need for robust digital frameworks in countries.


b) Cross-border Collaboration: Audience members suggested strengthening cross-border judicial collaboration to tackle jurisdictional challenges in cyber cases.


c) Judicial Training: Utmanzai emphasised the need for increased judicial training on technology issues to bridge the knowledge gap. This includes incorporating cyber law into law school curricula.


d) Showcasing AI Benefits: Rachel Magege, a lawyer specialising in data protection, suggested demonstrating the benefits of AI to increase its acceptance within the legal community.


e) Judiciary Global School on Internet Governance: Judge Laltaika announced the launch of this new initiative to train judges on internet governance issues.


4. AI Integration in Judicial Systems


Judge Laltaika shared insights on the use of AI in Tanzanian courts for various purposes:


– Scheduling court sessions


– Case assignment to judges


– Language translation


– Legal research assistance


He also mentioned an upcoming session on AI ethics for judges, highlighting the proactive approach to addressing AI-related challenges in the judiciary.


5. Inclusivity in Digital Rights


The discussion touched on the importance of ensuring inclusivity in digital rights:


a) Nazarius Kirama stressed the need for policies to prevent digital exclusion of marginalised groups.


b) Rachel Magege highlighted how gender-based violence can be exacerbated online and how the digital divide affects access to justice.


c) Martin Koyabe emphasised that frameworks should embed human rights protections to ensure inclusivity.


Thought-Provoking Insights:


1. Judge Laltaika shared an anecdote about a high court judge in East Africa who was summoned by a disciplinary committee for allegedly using ChatGPT in writing part of a judgment, highlighting real-world challenges of AI use in judicial processes.


2. Judge Laltaika provided a positive example from Tanzania, where a strategic five-year plan was developed to identify judiciary needs and secure executive support for technological advancements.


3. Martin Koyabe praised Tanzania’s approach to developing its cybersecurity strategy, which embedded fundamental tools and instruments within the strategy.


4. Umar Khan Utmanzai described the situation in Pakistan, where many high court judges lack basic knowledge about the internet and AI due to their isolation from public interaction and outdated legal education.


5. Judge Laltaika used a metaphor of rebuilding a house to illustrate the need for adapting the judiciary to accommodate the digital world.


Resolutions and Action Items:


1. Launch of the Judiciary Global School on Internet Governance to train judges on IG issues.


2. Plan to include more judges and legal practitioners in future IGF meetings, including a parliamentary track room session on this topic.


3. A Tanzanian judge’s commitment to request government sponsorship for lawyers to attend the next IGF.


4. Tanzania Internet Governance Forum’s initiatives to engage judges in IG discussions.


5. Recognition of UNESCO’s guidelines for AI use by judiciaries.


Unresolved Issues and Future Considerations:


1. Balancing judicial independence with the need for technology adoption.


2. The extent to which court processes should be digitised (e.g., online marriages).


3. Addressing AI hallucination and ensuring the accuracy of AI-generated legal information.


4. Funding and resource allocation for judiciary digitisation in developing countries.


In conclusion, this discussion underscored the critical role of the judiciary in shaping the future of digital rights and internet governance. It highlighted the urgent need for judges to adapt to rapidly evolving technologies while emphasising the importance of developing comprehensive legal frameworks to safeguard digital rights, privacy, and transparency in the digital age. The session called for greater collaboration between the judiciary, technology experts, and policymakers, emphasising the need for ongoing education and capacity building for judges and lawyers to keep pace with technological advancements.


Session Transcript

Nazarius Kirama: We are on Channel 5 for the session, so we'll wait for about two minutes before we start. I am Dr. Nazarius Kirama; I'll be moderating the session, my colleague Daniel Turan will be moderating online, and Atanas will be our reporter. I would like to take this opportunity to welcome all of you to this very important session. This session is happening for the first time in the lifetime of the Internet Governance Forum. It is the first time that we're going to have this, and we hope that next year will be bigger, and the years after that. If you can put on the presentation, please. Just a moment. So like I said at the beginning, my name is Nazarius Nikola Kirama, a.k.a. Nazar Nikola; that is my digital identity name. Today I'm going to be your pilot, and I have my co-pilot, Mr. Turan from Italy, and today we're going to have a session on judges and human rights online. As an overview of the session: at this time, when we live in the age of artificial intelligence, the digital age presents challenges that require safeguards on human rights online, and things like privacy, freedom, and inclusion are some key issues that need to be taken into account. So our session will focus on ways in which we can empower judges to address digital rights, and we will also explore some legal frameworks for inclusion. Like I said at the beginning, since the formation of the Internet Governance Forum in 2005, the judiciary has not been engaged in this space, perhaps because of that notion of the independence of the judiciary. But the judiciary, as one of the branches of government, needs to be included in this space so they can not only learn, but also engage properly in the debates on how the Internet should be governed. Our attempt as Tanzania Internet Governance Forum is to engage judges, and we have had two initiatives aiming for that, so you can see that in the IGF multi-stakeholder model we are bringing the judiciary in to be part of it. Can you hear me now? Okay. The aim of this session is to make sure that there are as many stakeholders as possible for collaboration and to protect digital rights online, and the objectives are to adapt legal systems to safeguard privacy and freedom of expression, address cross-border enforcement challenges, foster inclusive policy for marginalized communities, and build judicial capacity in Internet governance. So these are the kinds of objectives that we are going to attempt to address through our esteemed panel. As for the expected outcomes, we believe the speakers will be able to address how legal systems can better engage with privacy and freedom of expression. Judicial collaboration is one of the expected outcomes, in terms of a roadmap for cross-border cooperation in enforcing digital rights consistently and fairly; inclusivity in digital policies is another, in terms of recommendations for inclusive laws addressing marginalized and disabled communities to prevent digital exclusion; and the final expected outcome is the identification of tools and strategies to enhance judicial engagement in Internet governance. I would like to take this opportunity to thank all of our session makers, including the Honorable Judge Dr. Eliamani Laltaika for his input; myself; Rachel Magege, who is joining us online; Daniel Turan from Italy, who is our online moderator; Atanas Bazihire, who is the on-site rapporteur and is around here; Dr.
Martin Koyabe, who is our cybersecurity expert, sorry for the typo; and Pamela Chogo, who is a lecturer and an online rapporteur. We are also very glad today to introduce to you the new baby in town: the Judiciary Global School on Internet Governance, which has been registered with the Dynamic Coalition of Schools on Internet Governance. This is basically a platform for judges to learn about Internet governance, and it is an initiative of Tanzania IGF, the ISOC Tanzania Chapter, and the Organization for Digital Africa. And we thank Dr. Eliamani, Honorable Judge of the High Court of Tanzania, for his enormous input and counsel on this initiative. So in the years ahead, we expect to continue to train judges from various jurisdictions around the world, and we look forward to engaging this critical branch of government in the IG space. We thank all of them for the opportunity that has been created to engage judges in this space. Now I would like to take this opportunity to introduce our speakers. The speakers will have about one minute to introduce themselves: you can say your name, your expertise, and the organization you come from, and then from there we will continue. Welcome to the session, ladies and gentlemen; I look forward to your interaction. We'll have a Q&A session, and we look forward to receiving your questions; we hope and believe that the speakers in front of you are very capable of answering and interacting with these questions. Thank you. We start with Umar from Pakistan, Advocate Umar from Pakistan.


Umar Khan Utmanzai: Hello. Thank you so much, moderator of this session. This is Advocate Umar. I am based in Peshawar, Pakistan, practicing law at the Peshawar High Court, and I run a civil society organization called the Citizen Rights and Advocacy Forum, CROP, which basically works on awareness and sensitization around citizens' rights; digital rights and privacy protection are among our objectives. We sensitize the local community that access to a free and fair internet is their right, along with how we can protect them and sensitize them about cyberbullying, especially women, who are very vulnerable in Pakistan. So this is what we are doing. Along with that, I take on cyber cases in Pakistan. Thank you.


Nazarius Kirama: Dr. Koyabe.


Martin Koyabe: Yeah, thank you very much, Naz. First of all, let me take this opportunity, on behalf of the Global Forum on Cyber Expertise, where I work as a consultant, to really thank the organizers and the judge for pushing this idea. I remember we talked about it in Kigali at some point, and I'm really glad to see it here. My name is Dr. Martin Koyabe, and my main area of work is cybersecurity. I've worked extensively in a number of countries in Africa, specifically looking at strategy development and, more importantly, at building the ICT sector within the continent. We'll talk more within this session, and I'm really pleased to see some of you here today. Thank you.


Eliamani Isaya Laltaika: Thank you very much, facilitator. My name is Eliamani Isaya Laltaika. I'm a judge of the High Court of Tanzania and an adjunct faculty member of the Nelson Mandela African Institution of Science and Technology, where for at least 10 years I taught cybersecurity law, bioethics, and other law-related courses to scientists. And it's true, as we will discuss, that I very much believe that without engaging judges while building the digital economy, we will be shooting ourselves in the foot. If you have very good policies and very good laws, and you bring a case before a judge, and that judge cannot even tell a mouse from another device, then you are doing nothing. So I believe that judges are the fulcrum of any attempt to protect rights, be they digital or physical. And it is, therefore, my firm belief that in the next few years we will have a lot of judges here, and that is better for humanity. I have a dream.


Nazarius Kirama: Thank you so much, Judge, for always being there for us. We have had a lot to work on to make this a reality: in Kigali, as Dr. Koyabe said, it was just a dream, but now it is a reality. And we hope to continue to borrow your wisdom in engaging judges in the internet governance space. Now I will start with questions to our able speakers. I will start with you, Honorable Eliamani. Could you tell us why you have been advocating for the inclusion of the judiciary in the IGF? I mean, why is it important?


Eliamani Isaya Laltaika: Yeah. Thank you very much. Hello? Yeah, I can hear you. Thank you very much. Like I said, this is really something very close to my heart, and it has a personal story. After my PhD, I was employed by the Nelson Mandela African Institution of Science and Technology. It is one in a network across the continent. Our elder Mandela had a dream of making Africa a hub of skills, knowledge, and capabilities in STEM: science, technology, engineering, and mathematics. So he engaged with funders and donors from across the world, and through the IMF and the World Bank they wrote a concept for establishing the MIT of Africa. They said, okay, we really cannot have one single MIT for Africa because of language barriers: there are countries which are Arabic-speaking, others French, others English, others Portuguese. We also cannot have one MIT for the whole continent because education systems differ. In the end, they decided to establish four institutions, and I'm very glad that through diplomacy and the efforts of the government of Tanzania at the time, through President Jakaya Mrisho Kikwete, who before that had been in foreign affairs, one of those prestigious institutions was established in my hometown of Arusha. As soon as I finished my PhD, I joined this institution, and I was tasked with preparing curricula and course outlines to teach cybersecurity law, intellectual property, bioethics, and every other area of law required by scientists. It was while reading the materials available in the cases that I discovered there was a huge knowledge gap between judges and lawyers on one hand and scientists on the other. From there, I became a link to try and bridge the gap. I conducted many seminars with the law society and also with judges to try and alert them to fundamental issues related to ICT. I didn't know then that I would become a judge, but five or six years later, in 2021, I was appointed by the President to become a judge of the High Court. My friend Dr. Nazar Kirama here used to work with me in the Tanzania IGF before I became a judge. One day, he came very humbly and tried to ask me whether, after becoming a judge, I would still come to the IGF. He was almost trembling, so I just gave him a hug and said, please sit down. It's okay. I'm still the same. We traveled together to Kigali, where Dr. Koyabe is based. Throughout that trip, I was saying I need to have judges included, because I'm the only one who is getting this knowledge. Luckily, we then traveled to Kyoto. During the closing ceremony of last year's IGF, I was given the platform. After Prime Minister Suzuki, I took the stage and I made a joke. I said, there are about 9,000 people here, but I'm the only judge. What happened? I really pray that next year we have more judges. I'm very glad that the IGF Secretariat took this very seriously. They wrote me an email back in Tanzania and said, we have started; your colleague, Dr. Nazar Kirama, has brought a proposition. Today at 12, you will see a session, established by the IGF, which will address the ethics of judges and the use of artificial intelligence. Because some judges are afraid of coming here; they think artificial intelligence is eroding their ethical values. So we have a session today to answer: does the use of AI make you a better judge or a worse judge? And we'll have a very open discussion to make sure that judges all over the world gain the confidence to come with us. I will stop there for now, because I have so many stories to tell.


Nazarius Kirama: Thank you so much, Judge Laltaika. I dare say you are one of the most down-to-earth judges. We have Advocate Rachel Magege trying to join us online, and as soon as she joins us, we will have her introduce herself. We continue with Advocate Umar from Pakistan. Advocate Umar, I know you have been practicing law for several years. Do you think judges need to learn new tools of statutory interpretation to accommodate human rights in the digital space?


Umar Khan Utmanzai: The digital world is growing rapidly, and not just law; every field is going to have to adopt it. Having been in the legal fraternity for the last three years, I believe it is essential for judges to adopt new tools in order to try and deal with cases related to cybercrime and other issues, because under the traditional system of law you do not have the means to try cases related to hate speech and freedom of expression online. We have seen that the IGF has established the Charter of Human Rights and Principles for the Internet, which is derived from the UDHR of 1948, and that is important. The rights that were available to citizens, to all humans, under the UDHR have now been carried into the Charter of Human Rights and Principles for the Internet. A judge working with tools that have not developed the way the internet has developed, and now the way AI is developing, will face even more such issues in the coming years. So I believe it is very important for judges to adapt themselves to the tools that are coming to citizens in the field of law. Judges, who deliver justice, are among the most important stakeholders, as the Honorable Judge of the High Court of Tanzania has explained very brilliantly in describing why he has always had an interest in working on digital rights. So I believe that nowadays it is very important for judges to adapt to technology, to the new tools for interpreting the laws. The traditional, older laws do not answer or address the issues of the current digital world.


Nazarius Kirama: Thank you, Advocate Umar. I think it is very important for judges to keep abreast of emerging technologies and the tools they need to deliver justice on time, because we know justice delayed is justice denied, and technology tools have the capacity to shorten the time within which justice can be delivered. Now, we have Madam Advocate Rachel Magege. Can you hear us?


Rachel Magege: Yes, yes, good morning. I can hear you loud and clear. Can you hear me?


Nazarius Kirama: Yes, we can hear you, Rachel. Thank you so much for joining us online. If you can take one minute to introduce yourself and your institution, and then answer this question: Advocate, can gender-based violence be exacerbated online? That will be your question after your introduction. Welcome, yes.


Rachel Magege: Thank you, thank you so much. And good morning to everyone. I am Rachel Magege. I am a Tanzanian citizen and I am a lawyer. My areas of practice are in data protection and data governance more generally, and I am very, very honored to be a part of this panel and a part of this conversation. I sit on the board of directors of the Tanzania Privacy Professionals Association; we have an association of privacy and data protection experts in the country. So I'm really glad to be having this conversation. So, Mr. Nazarius, on your question about gender-based violence and digital and online platforms: absolutely, gender-based violence can be carried over, amplified, and made even greater by all of these different digital and online platforms. It is very important for people to know that, many times, what happens in the physical world is what is going to move into and happen in the online and digital spaces. Likewise, what happens in the online spaces, with people who do not even know each other perpetrating harassment and bullying against specific genders, can also move into the physical. I'm usually very sensitive about this topic with the many clients and people that we talk to, and even in the judiciary sector. If it's already happening in the physical space, with different backgrounds, different relationships, and different structures, and you have an institution that wants to introduce artificial intelligence or other technologies, that same mindset is still going to carry over onto the online and digital platforms. So it's very important to make sure that, as much as people are working towards understanding and learning these emerging technologies, they also need to understand and learn more about themselves and about the biases that they carry, which can transfer even into the digital and online space. That's a very brief response I can give for now. I'm happy to answer more questions as they come. Thank you.


Nazarius Kirama: Thank you very much, Advocate Rachel. And indeed, I was tickled by what the Honorable Judge said at the beginning: it is very important for the judiciary to be engaged on all these issues, because cases about the internet or internet-related issues will always end up in the courts, so the judiciary should always be the one that knows the issues. And this is what we are trying to achieve in terms of engagement of the judiciary in the internet governance space. Honorable Judge, how can governments make sure that privacy laws keep up with fast-changing technology? Because now we have artificial intelligence, we have all these blockchains, and there is resistance against all of this. How can governments across various jurisdictions keep up with this fast-changing technology?


Eliamani Isaya Laltaika: Thank you, facilitator. One of the important things for judiciaries to do is to breathe life into legislation. When a law is enacted, be it at the international or the national level, it is a document which is more or less formless: the definitions are vague, the rights are not understandable. Sometimes, if it's an international instrument, it can stay in the cupboards for many years. But as soon as this law goes to the court and the court gives an interpretation, it breathes life into it; it makes that law something that people can relate to. And that is what we are trying to do currently with data protection and privacy laws. For example, the first thing that people out there do not distinguish, lay people and many of our technocrats, is that data protection and privacy are totally different. Many people think data protection and privacy are the same. They are not. A judge knows that privacy is part of human rights, while data protection, the protection of one's information, is a distinct fundamental right. If you look at the EU Charter of Fundamental Rights, these are Articles 7 and 8, and they are totally different. In our country, Tanzania, all data protection cases start with the commission, because that right is protected administratively. A privacy issue, being a human rights issue, goes directly to the High Court; you don't even go through the administrative procedures. If a lower court recognizes that it has a case touching on fundamental human rights, it immediately asks you to go to the High Court. So at the moment, what I'm seeing across jurisdictions is that judges are defining concepts, judges are clarifying concepts, judges are laying down the foundations for integrating fundamental privacy and data protection laws into the larger legal fabric. Breaches are committed by the private sector: it is the private sector which manufactures these gadgets you are seeing. But it is the role of the government to regulate them. The court stands there as an umpire to say: okay, the manufacturer is responsible for protecting privacy when you release a headset or a computer or whatever gadget, and it is upon the regulator to make sure of that.


Martin Koyabe: It is stored in digital form, in zeros and ones. What that means is that there is a huge fundamental challenge in how we actually handle digital evidence. The second issue is that it is very volatile: if you change the time when it is stored, or if you move it from one point to another, you might actually alter the evidence attributed to it. There are some specific areas that need to be considered. It also traverses boundaries, because you can carry digital evidence on a USB drive, on a memory stick, on any other device, and move it from one jurisdiction to another, and that brings a huge number of challenges when it comes to judgment and to how we have to handle that issue. So the processing of evidence has to be based on principles that have been agreed across jurisdictions. There is also the challenge of the dependency of digital evidence on technology. You will need a device, or at least something that will interpret what is referred to as digital evidence into a form that is admissible. And therefore the authenticity that the judge talked about, of the device being used to interpret that digital evidence, becomes fundamental. It is important that we understand where we are going when it comes to evidence that is admissible in court. The other issue we have to be cognizant of is that when you consider digital evidence, and how we make sure it can be admitted in court, the preservation of that evidence becomes fundamental. And all this requires what we call expertise: expertise in processing, digital evidence officers, and so forth. And the last point I wanted to make is that those of us who work with government have a duty to ensure that the experts we have, who provide expertise in courts, who provide forensic expertise within the judicial system, are compensated well enough so that we can preserve that knowledge within the institution. So that's how I view the whole concept of digital evidence and how fundamental it is to court cases. Let me just give you an example, before I end, of what happened in a country I know of, where there was evidence of a crime that had occurred in a specific room, and this evidence was on the screens of the computers. But because the collection was so rudimentary, the police and other officers who went to collect this evidence simply unplugged the cable, and what was on the screen disappeared, and with it evidence that was critical for the judgment in the law courts. So there's that bit. There's also the issue around issuing a court order. In some cases, court orders are supposed to be issued in written form; some laws are so arcane that they cannot accept any other form of court order unless it has been written and delivered to the individual who is supposed to appear. So we have fundamental differences in how we need to update our courts, our judges, and also our legal frameworks. But we'll come to that issue at some point. Thank you.
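A brief illustration of the preservation point Dr. Koyabe raises: in digital forensics practice, the integrity of an exhibit is commonly anchored by computing a cryptographic hash at the moment of collection and re-verifying it before the evidence is presented. The following is a minimal Python sketch of that idea; the file name, officer name, and record format are hypothetical and not drawn from any specific jurisdiction's procedure.

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_evidence(path: str) -> str:
    """Compute a SHA-256 digest of an evidence file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody(path: str, officer: str) -> dict:
    """Create a chain-of-custody entry: what was collected, by whom, when, and its hash."""
    return {
        "file": path,
        "collected_by": officer,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hash_evidence(path),
    }

def verify_evidence(path: str, entry: dict) -> bool:
    """Re-hash the file later (e.g., before trial) and compare to the recorded digest."""
    return hash_evidence(path) == entry["sha256"]

if __name__ == "__main__":
    # Hypothetical exhibit; in practice this would be a seized disk image or log file.
    with open("exhibit_a.img", "wb") as f:
        f.write(b"placeholder evidence bytes")
    entry = record_custody("exhibit_a.img", officer="Officer A")
    print(json.dumps(entry, indent=2))
    # Any later alteration, even a single bit, changes the digest,
    # so re-verification fails and tampering becomes detectable.
    assert verify_evidence("exhibit_a.img", entry)
```

Because the digest changes if even one bit of the file changes, moving an exhibit across devices or jurisdictions matters less, so long as the recorded hash still matches when it is re-computed in court.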


Nazarius Kirama: Thank you, Dr. Koyabe. I think we are almost at the end of this first segment of presentations from the speakers. Before we take questions, I would like to ask Rachel: what steps can policymakers take to ensure people with disabilities and marginalized groups have equal access to digital tools and services?


Rachel Magege: All right. Thank you so much for that question. Here's what I'm going to say. A lot of frameworks, laws, regulations, and subsidiary legislation have already been put in place. And whereas it is good to have a specific law that mentions a specific person or group of people, what I see as more beneficial is for policymakers to continuously remind, and maybe even educate, the implementers and the regulators to be very creative and deliberate in how they carry out the provisions of the law. Yes, it is good to say that we want a specific law on digital platforms and the digital innovation space to say this and that about people with disabilities or the elderly. But if you already have a law, for example, the People with Disabilities Act, yes, and it's very general, as with many…


Nazarius Kirama: Rachel, I think we lost you. Hello? Anyway, I think we can proceed. Did we lose her? Anyway, I think I will continue with Advocate Umar on… Is Rachel online? No. Can you hear me, sir? Yes, we can hear you.


Rachel Magege: My sincere apologies, the Zoom link just threw me out, but I'm back now. Let me quickly wrap up my submission on this. What I was saying is that as long as you have laws already in place, the implementers and the regulators have to be very creative. For example, say I have a provision that says everyone needs to have access to clean and safe water. All right? You, as the implementer, as the regulator, are going to go to a place where you will have little children, you will have the elderly, you will have people with physical disabilities, and you may have people with visual impairments, hearing impairments, and things like that. You have to be very creative in looking at all of these different groups and seeing that, in order for all of them to get clean water, this person is going to need this, that person is going to need that, another person will need extra assistance, but at the end of the day, all of them get water. It is the very same with digital platforms and the digital space. And we're already seeing a lot of this come up, because at the end of the day the policymakers get to say, we have created this law, and when it is executed and implemented, young students, young girls, and people with disabilities are all having their needs met, but in the different ways required for each of their different needs. So that's the biggest thing that I would say, Mr. Nazarius. For the benefit of the members in the audience: Tanzania has already been working on a number of different pieces of legislation, and currently the ICT ministry is working on a national artificial intelligence framework and strategy. I remember sitting down with them and with some development partners, where we advised them that, as the ministries conduct different needs and impact assessments in the country, they have to make sure that this law and this framework, first of all, align with all the other ICT frameworks in the country, and especially with the Data Protection Act of Tanzania, but also cater to every different group of people. So that is the feedback I can give right now. And I know that once a regulator, an implementer, the commission, a regional commissioner, a district commissioner, is creative in how they carry out this law, it is actually also going to reduce the number of complaints and lawsuits that Judge Laltaika receives in court. It will reduce that a lot, because you will see that this implementer, this government official, has actually taken the time to understand what the policies provide, what different groups of people need, and how those different needs can be met. Thank you very much.


Nazarius Kirama: Thank you, Advocate, for bringing in the perspective of marginalized sections of society. Now we are going to go to the audience, but before we go to the audience in the room, we'd like to have a question from Zoom. There is a person on Zoom with a question for the judge. So if…


AUDIENCE: Can you hear me?


Nazarius Kirama: Yes.


AUDIENCE: We have a question for our honorable judge Eliamani. Nana is asking: what perspectives are there for the use of AI to support, refine, clarify, enhance, or influence decisions for judges? This is a question from an operational point of view.


Nazarius Kirama: Judge, that comes to you.


Eliamani Isaya Laltaika: Thank you very much for that brilliant question from our Zoom attendee. It is now an open secret that AI is used in the courtroom, including by judges and their assistants. However, there are no guidelines or regulations, and there are very different perspectives on how jurisdictions are embracing artificial intelligence. Within the East African Community, and I'm not going to mention countries here, in one country a high court judge was summoned by the disciplinary committee because part of his judgment was allegedly written by ChatGPT. I think there was a "4.0" that looked like ChatGPT-4, and some sentences which were not very legal, and he was called to answer; many lawyers have been reprimanded, and they got scared. Within the same East African Community, just across the border, a chief justice is saying: please embrace generative AI, make use of it to clarify presentations by lawyers; at the end of the day you are responsible for what you say, but use any tools. Now, we are not sure what is happening, because these are countries sharing a border and sharing a history of law inherited from the British. However, luckily, we now have UNESCO. We talked yesterday, when we were launching the guidelines for the use of AI by judiciaries that are being pioneered by UNESCO. I hope that in the next few months, or a year or two down the line, we will have clear guidelines. To answer specifically from my jurisdiction: we are using artificial intelligence in the courts in Tanzania in at least four ways. First, scheduling. We no longer put our schedules together manually; it is automated. There is an electronic case management system, so I know the cases that will come to my chamber in two weeks or next month, because they are simply automated. Secondly, assigning cases. The judge in charge is no longer the only person who assigns cases. In the past, people could allege that the judge in charge was steering particular cases to particular judges. Nowadays, it is the system which simply assigns: judge one, judge two, judge three. Cases are filed, and the system says: Laltaika, so-and-so, so-and-so. So you can be sure that there is no bias, and you handle your file properly. Thirdly, we use artificial intelligence in language translation. Our country uses Kiswahili, which is the national language, but the language of the court is English. So if someone speaks, we have deployed transcription and translation systems that can transcribe what a person is saying and translate it back. Fourthly, we use it intensively for research. I said yesterday that when ChatGPT started in 2021, 2022, it was very inaccurate: you would get fake cases; it would come up with cases which are not anywhere in the law reports. That has changed dramatically. Now you can be fairly sure that it provides you with cases that have been decided by the Court of Appeal of Tanzania; it is only for the judge, at the end, to say, this is relevant here, this is not. And finally, we are also using AI to produce simplified versions of legal texts, including legislation.
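As a side note on the randomized case assignment the judge describes, the core mechanism is simple enough to sketch in a few lines. This is a simplified illustration with hypothetical judge and case names; a production court system would also balance caseloads and screen for conflicts of interest, which this omits. Using a cryptographically secure source of randomness makes assignments hard to predict or manipulate.

```python
import secrets

def assign_cases(case_ids: list[str], judges: list[str]) -> dict[str, str]:
    """Assign each newly filed case to a judge uniformly at random,
    removing any individual's discretion over who hears which case."""
    return {case: secrets.choice(judges) for case in case_ids}

# Hypothetical roster and filings, for illustration only.
judges = ["Judge 1", "Judge 2", "Judge 3"]
cases = ["Civil Case 14 of 2024", "Criminal Appeal 7 of 2024"]

for case, judge in assign_cases(cases, judges).items():
    print(f"{case} -> {judge}")
```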


Nazarius Kirama: Thank you, Judge, for your critical intervention. Now I would like to open the Q&A to the audience in the room. If you have any question for any of the panelists, you are welcome to raise your hand. Please, if you can get the microphone.


AUDIENCE: You hear me?


Nazarius Kirama: Yeah, okay, here you go.


AUDIENCE: Thank you so much. Really appreciate it. This has been a very interesting conversation. I also attended the other session on AI in the judiciary with the Honorable Judge; I personally learned a lot, and thank you for that other panel. I wanted to comment on the fact that there are a lot of people in the room who are not familiar with AI. And I think it's very important to note that we tend to talk a lot about a specific context, but other countries have a different context, for example in terms of their maturity in using technology. I'm from Iraq, and I wanted to talk first about the political will in the country to use technology. The judiciary, I know, is independent, but it is always somehow affiliated with, or in line with, the political will. Before getting to the question, I wanted to echo what the Honorable Judge said: it's unfortunate that we don't have many judges, or lawyers, or legal practitioners, even in the room; we are very few. So hopefully, next year, we will have more judges, lawyers, and legal practitioners in the room to share their experiences and views. My question is for the Honorable Judge, because we are advocating to bring in all the stakeholders, including the judiciary, in my country for example. For a judge who has been heavily dependent on paper and not using technology, as you mentioned at the beginning of your speech, how would you convince someone who needs to change all this, who needs to spend more time learning technology, hiring experts to preserve evidence, analyze evidence, and things like that? What are the main three arguments, let's say, that you would use when you advocate for that change? And the other question: do you think that, at the beginning, judges will need, in addition to investigators, someone tech-savvy to help with all this kind of evidence? Thank you so much.


Nazarius Kirama: A very loaded question, I might say. Should we take answers as the questions come, or take all the questions first and then answer? Which should it be? As they come. Is that a consensus? I am a democratic moderator.


Rachel Magege: It’s okay with me. Thank you.


Nazarius Kirama: Okay. Thank you so much. Judge, if you can intervene, please.


Eliamani Isaya Laltaika: Okay. Very quickly. Thank you very, very much for that question. It is true that we do not work in isolation. The old-fashioned way of talking about separation of powers: I am from the judiciary, my colleague is a minister, and yet we work together. So I meet my minister and say, okay, look here, this is the law you are proposing; it doesn't work our way, so do this. That's what is happening in the U.K. That's what is happening in the U.S. If you follow a very strict kind of separation of powers, you will be left behind, so everyone has to start somewhere. Judges who are so based on paper and writing, in Dar es Salaam for instance, can be encouraged to start small. In our country, it is mandatory for every judge to use a computer; you cannot avoid it. The former professor of law at the University of Dar es Salaam is at least ten times more tech-savvy than me; you could confuse him with a computer scientist, because he talks about data protection and everything, and he's the one who has brought Tanzania to this level of use of AI. It is true, on question two, that we first got assistants who are young lawyers. Every judge has one legal research assistant who knows computers and has been encouraging judges. So you are welcome to come to Tanzania to learn. We are welcoming many countries: we get at least ten countries visiting the judiciary of Tanzania every half year. So you are welcome to come from Iraq, and we will discuss how we can transfer the knowledge we have so far.


Nazarius Kirama: Thank you, judge.


Rachel Magege: Mr. Nazarius?


Nazarius Kirama: Yes.


Rachel Magege: If I may, I had written in the chat that I would like to add just a very quick response after the honorable judge, if that's okay.


Nazarius Kirama: Go ahead.


Rachel Magege: Yes. Honorable judge, thank you so much, as always, for your endless wisdom. To answer the gentleman who asked about the political will for, and acceptance of, these technologies, I do want to say a little something as well, if you allow me, speaking to the context of Tanzania and different parts of Africa. I'll be very honest: there is already a fear of technology, and I'm looking at diverse groups of people, yes? We are here sitting in this room and virtually; we are lawyers, I'm a lawyer, we have judges, we have different technology experts. But outside of us, there are so many people out there; we're looking at that digital divide. There are people in rural areas, people with different levels of education and access to education. But these are the same people who, in one way or another, may find themselves in court for different matters and different reasons. And here you have the judiciary of Tanzania, for example, already using artificial intelligence. So how do you bridge that gap? One of the things that I think really helps is showing people the benefits and the good of AI. It has to be more of a narrative than what they are seeing online or hearing from their neighbors, yes? If you are telling people, or telling me, that because of artificial intelligence an honorable judge like Laltaika can now read large volumes of evidence in a shorter amount of time, and that it can help him as he writes his judgments, that is already a good thing. In Tanzania, for example, one of the big demonstrations of the benefits of AI came through a number of projects; one of them was from the Sokoine University of Agriculture, where a lecturer has been using artificial intelligence to quickly detect diseases in cash crops, in maize and things like that. So if there is a language we can use to start communicating to lawyers, court clerks, judges, and many different members of society that artificial intelligence can help you predict floods coming in your country, can help you do this and that, I think that might be a good avenue for creating the political will, for people to see and understand that artificial intelligence can be used to make life easier, and therefore for frameworks to be established and so on. So thank you so much. That was my additional contribution.


Nazarius Kirama: Thank you, Advocate. I saw a question there and another over here. So we start with him and then go to the gentleman over there.


AUDIENCE: Thank you. Thank you, moderator. Sorry to put you on the spot, but I enjoyed your response to that question. Maybe drawing from your experience in Tanzania: it is true that from when ChatGPT started until now there has been some improvement in accuracy, like you said. But even in 2024 there are still instances, drawing from experience, where we find that AI systems hallucinate and present what is not there. So my first question is: how does the judiciary in Tanzania take care of instances where facts or authorities are presented that do not really exist? For a little background, AI hallucination has given rise to conversations around ensuring the human in and on the loop in these systems. So what measures are there in the judiciary to ensure that materials presented by AI systems are actually accurate? And the second question, which is going to be very short: the focus seems to be on judges, from what you just said, but we also find that lawyers coming before the courts use ChatGPT for their briefs. What measures are in place in the Tanzanian case to check accuracy in the briefs filed by lawyers in court? Thank you.


Nazarius Kirama: Thank you. I will also take the question from the gentleman there, so both can be answered together.


AUDIENCE: Okay. Thank you. My name is Doron from the UNEGOV. Thank you for this very productive session and for opening the call to all these relevant questions about the judiciary. I will just quickly say, because we mentioned the three branches of government, executive, legislative, and judicial: there is a common perception that the executive goes at full speed with these digitalization efforts, but with judiciaries, while some countries keep the pace, there are plenty of countries where the judiciary is left behind. And this is where independence comes into play, because the judiciary is not strong enough, not financially independent enough, to take all the benefits from digitalization. I will briefly mention an example. We were running courses for public prosecutors in one country, and at the end the comment was: okay, experts, we know this is all good, data protection works, we know the rules. But the problem is that we are ten prosecutors with only three computers. So we wait for one prosecutor to go to court so the others can work and write up their matters, which means we can access each other's evidence, and so much for privacy. My first question is: how can we push the executive power, the government, to understand that the modernization and the capacity building of judges' skills must be supported by the government, mostly financially? And my second one is more provocative: how far do you think we can go with digitalization in the judiciary? I'm asking this on the service supply side. For example, can we allow people to make their last will using an online service? Can they conclude marriages online? Where do we set the red line? Because there is a reason why you go and leave your last will in front of a judge and two witnesses, to witness that it is not made under pressure. So how far do you think we can go with this digitalization?


Nazarius Kirama: Thank you so much. If we can have the answers, and then we'll go to the last segment of our session. We only have about 15 minutes left, so let us keep our contributions and interventions short as well. Thank you so much.


Martin Koyabe: Okay. Let me try and answer part of that question; the rest I'll leave to the judge. The fundamental issue here is the starting point for many countries. What we are seeing from Iraq and others is the need for robust digital frameworks in the country. For example, when Tanzania was developing its cybersecurity strategy, they were very keen that the fundamental tools and instruments be embedded within the strategy. So for Iraq, for example, I would urge that in your strategy and framework you include specific areas prescribing the types of cases that could arise, being proportionate in terms of the punishments allocated, and having those fundamentals within it. The second thing is to have a human rights component in your framework: things like freedom of speech and freedom of expression embedded within the framework and the structures of the country. And then lastly, there is the area of capacity building, and on capacity building we can't argue; it is one of the key areas that we really need to look at. You've got to have what we call the soft approach. Take the judges: they don't like going to workshops, they want to go on retreats. So take them where they want to go, and impact the judges in a gradual manner. And take the judiciary, take the legislators, take the executive, and train them within that concept, so that each of them can make an equitable contribution towards the functioning of what is supposed to happen. In terms of the budget question, which I've heard here, there has to be a concerted political effort to support the judiciary in automating its systems. There are so many countries that have cleared their backlog of cases; they are really efficient, and they bring the benefit to the citizens, because their cases can be heard quicker. Platforms have also moved on: we now have cases being heard online. So the idea of budgetary allocation is critical. But let me also come to the technical people who describe these things to the executives. I think we, as technical people in the room, have a duty in how we describe and explain problems. If you go to a politician, a politician doesn't want to hear technical detail in the middle of an attack; what matters to them is winning elections. So there's a way of interpreting these things to the people who make decisions, as in your case, if you're trying to convince other people. Thank you.


Nazarius Kirama: Thank you. Judge, if you have anything to add?


Eliamani Isaya Laltaika: Thank you very much. I really do have things to add, but I will be brief. The first question is from my brother. Hallucination is a new concept in generative AI, where the system simply fails to understand what you're asking. You ask it about factual issues related to economic development in Tanzania, and it gives you things from Mozambique; you have to prompt it several times for it to come back. Hallucination is here to stay. But I want to be positive: even lawyers addressing judges can experience hallucination. Counsel, did you really mean this? Anyway, on a serious note, on the legal issues: generative AI depends on the large volumes of data it is trained on to become accurate, and court cases are massive, in the millions. In Tanzania, every single judgment must be uploaded online. If my principal judge gets a report that Judge Laltaika has decided 20 cases today, tomorrow he must see them online under the signature and stamp of the court. So if you go to TanzLII, T-A-N-Z-L-I-I, you will see every case I have decided for the past four years. And AI is feeding on this to produce almost exactly the cases and authorities I need. That is what is happening in other countries too, because judgments are copyright-free. The law in Tanzania says that if you write anything as a judge, you cannot copyright it; it belongs to the public. That's the case in Pakistan, in Kenya, everywhere. So, more than in many other fields, law and AI go together very well. In the next 10 years, it will be very easy to generate something close to what you would have written as a judge. However, on the second question, it is upon you to be sure. In Tanzania, we have 15,000 advocates. These are the guardians of the law. If they see anything unusual written by a judge, they screenshot it and it is sent all over: look at this judge, what is he writing? So we are fearful, we are very afraid. If I generate something from the internet, I will still read it very carefully to make sure it has gone through a factual check before it goes out. On the budget question: in Tanzania, we got out of this because we started with a strategic plan. We sat down and made a five-year strategic plan, identified what we needed, and handed it over to the executive. The executive said, okay, now we know your needs. In the past, they didn't know what we needed; they would simply give money this month, next month. But now they have a clear picture of what the judiciary needs over the next 10, 15 years. Lastly, a very difficult question from my brother: how far should we go with digitization? If it were up to me, I would digitize everything. For example, during COVID, many people were waiting to come and conduct their marriages or meet a judge, and the judges were afraid. I could just say: okay, raise your hand where you are, say this solemnly, I declare you married, then you go, and you will send me a signed copy. Why should I see you? There is a book by a professor of law from New Zealand, or perhaps Australia, who is a futurist of virtual courts. He has written about how courts will be in the next 50 years: everything will be online. You will not need to have a lawyer come before you so you can inspect them as if you were a police officer; you just need the material. We will be working towards that. That's how the world is moving. Thank you.


Nazarius Kirama: Thank you. Thank you, Judge. Now we go to the last segment of our session. Advocate Umar, I don't want to shortchange you on questions. I know you have another question here: what challenges do judges face when handling cases about cybercrime, and how can they be addressed?


Umar Khan Utmanzai: Microphone. So this is a very important thing: rapidly growing technology. Judges also have certain issues with technology. In cases dealing with cybersecurity or cybercrime, judges face the complexity of the technology because they are not well trained in these things. In Pakistan, the judge sitting over here is lucky to be able to sit in a forum like this; in Pakistan, a judge cannot even sit in public or meet people in public. And those who are judges in the high court graduated before 2000, in 1998, 1999, 2000, so they may not even know about the internet; they do not know about AI. So there is the complexity of the technology. Along with that, there are jurisdictional issues: if a person sitting in Pakistan has been harassed from somewhere outside Pakistan, and the case comes before the judge, how will he try that case? That is one of the biggest issues in cyber cases. Along with that, there is the lack of precedent. In Pakistan we have the Prevention of Electronic Crimes Act, PECA, which was passed by the Parliament of Pakistan in 2016, so there are not many cases, and judges face a lack of precedents. Along with that, there are evidentiary challenges: evidence can be altered, can be tampered with. How will a judge identify tampered and altered evidence without forensics? In so many countries, as the brother from Iraq mentioned, in developing states, in Global South countries, there is an issue of technology along with forensics. There is also the issue of the speed of proceedings. In my country, in a whole district, there is one judge dealing with cybercrime cases, so if a person is involved in a case, it will take years. It is very important to speed this up and to appoint the maximum number of judges for cybercrime cases. Along with that, there is awareness regarding emerging threats, because the technology keeps changing. Five years ago there was no AI, and the laws were made according to that situation; now we have AI. After another five years, what will the next technology be? So judges must keep up with emerging threats. I believe in training for judges, a cyber curriculum for judges, and, very importantly, that students of law should be taught technology and cyber law. In Pakistan, I graduated from a very renowned law college, but I was never taught cyber law, either as a minor subject or as a major subject. When I came into the field, I had to learn it by myself. So I believe that cyber law, and things related to cyber and technology, should be included in the curriculum. That is my intervention from my side. Thank you.


Nazarius Kirama: And I think, as Dr. Martin Koyabe alluded to, we need frameworks. Is there a question from online? Then we'll go to that lady. Panelists, please prepare your final two-minute contributions, because the session is about to wind up. Yes, there is a lady. If you can take the microphone to the lady, please. Yes, go ahead.


AUDIENCE: Hello, thank you so much for all the informative presentations. My name is Hassar Taibi. I'm from the Mawadda Association for Family Stability in Riyadh. I actually have not a question but an input, if you allow it, please. Despite rapid advancement in digital laws and regulations addressing online rights, judicial systems face significant challenges in adapting to fast-evolving technologies. Gaps in current legal frameworks hinder the protection of digital freedom of expression, especially in light of cross-border risks, like data breaches and discriminatory AI applications. Marginalized groups face heightened barriers to accessing digital justice due to the lack of…


Nazarius Kirama: Is that a question?


AUDIENCE: No, no, this is just an input, if it's okay.


Nazarius Kirama: Yeah, if you can keep it short because of time. Yeah, yeah.


AUDIENCE: Yes. Okay, sure. So we call for the development of comprehensive legal frameworks: draft laws aligned with emerging challenges to safeguard digital rights, privacy, and transparency; strengthen cross-border judicial collaboration; establish mechanisms to coordinate judicial decisions across countries for effective handling of cross-border digital issues; and engage multi-stakeholder…


Nazarius Kirama: Thank you so much for your intervention. Yes, we will take a copy of that, and I will share it with the participants. Because of time, let's go back to the panel. If you can make your parting remarks short, a two-minute summary of today's session, starting with Umar, please. Thank you.


Umar Khan Utmanzai: So the first thing: I'm just loving it that, for the first time, somebody from the legal fraternity has taken the responsibility to come. This is my fourth IGF, but it is the first time I'm attending a session on the judiciary. And we should not include only the judges: it should include lawyers, prosecutors, everyone. I think the beauty of this panel is that we have academia, we have judges, we have lawyers, we have civil society. So I believe this should continue, and I'm hopeful we can collaborate with the judges and with you all for the next IGF; we might have judges from Pakistan. The Honorable Judge has talked about visits to Tanzania; I would love to connect him with some judges in Pakistan. So I believe that this should continue and the judicial system should be empowered. Thank you.


Martin Koyabe: Thank you very much for this session. Let me just raise three very important issues here. The first is that this initiative, which started in Kigali, and I'm very glad we're here, has a lot of potential to ensure, as my colleague Umar said, inclusivity from different facets, whether from technology, the judiciary, or other areas. That is something we need to build on from this conversation going forward. The second is the frameworks that exist within member countries: let's try to embed into our frameworks some of the conventions that are very straightforward, like the Budapest Convention, the Malabo Convention for Africa, and so forth, which can enhance the way we look at the judiciary and the laws. And then, technology is here to stay. Let's remember one thing: what technology gives, it can also take away in equal measure. So as we move towards embedding technology, let us also put mechanisms in place for when things go wrong, because there are adversaries out there who can attack our systems and take over some of the decisions, and that can be very disastrous. So let's make sure that at every point we do what we call privacy first, security first, as we implement these things. Thank you.


Nazarius Kirama: Judge.


Eliamani Isaya Laltaika: Thank you very much. Three issues. First, what the lady was reading is really fundamental, and I invite you and everyone else: just after this, at 12, we are in the parliamentary track room for another session where the IGF is now planning to have judges included. You will see this dream taking shape. So you can very much still share that, and if you don't get an opportunity, please give a copy to me; I know how to work with it. What you are saying is very important. Secondly, let us all imagine we are like a person who has bought a house and moved in, only to realize that the door doesn't work, the window is too old, the bed is too small. That is where we find ourselves in the first quarter of the 21st century: we have to demolish a few walls and rebuild them to accommodate the digital world. I am meeting my minister for communication from Tanzania, and we will be discussing a few things. I will ask him to sponsor a few lawyers next year, if they are not sponsored by the IGF. And, Advocate, you can be sure that next year you'll see your fellow advocates here, because when I meet the minister, the first thing I'll tell him is: please tell the TCRA to sponsor lawyers to come here. Because there is a joke they tell, and I will finish with this. It is a joke, don't be offended, please. Are you allowing me to say this joke? Yes. They say: if you are doing anything serious and you are not including a lawyer, there are only two things involved. Either you are not serious, or what you are doing is not serious. Thank you.


Nazarius Kirama: Thank you so much. Thank you so much for attending our session, and be sure that we will continue to interact in the future. For those who would like to follow us on the Global Judicial Network on Internet Governance and the School on Internet Governance for Judges, we can share contacts later as we finish. Thank you for joining us, and thank you for your contributions. I know the time was not sufficient for everybody to contribute as much as he or she would have liked. We'll take your paper and make it part of our conversation. Thank you so much, and we look forward to collaborating with you. Can we have a group photo? Yes, yes, we should have a group photo. Thank you. It's getting more crowded. Yes, yes.


Eliamani Isaya Laltaika

Speech speed: 127 words per minute
Speech length: 2902 words
Speech time: 1360 seconds

Judges need to understand digital issues to properly adjudicate cases

Explanation: Judge Laltaika emphasizes the importance of judges understanding digital technologies to effectively handle cases in the modern era. He argues that without this knowledge, judges cannot properly protect rights in the digital realm.

Evidence: He gives an example of a judge who cannot tell a mouse from another device, illustrating the need for technological literacy among judges.

Major Discussion Point: Importance of Judiciary Engagement in Internet Governance

Agreed with: Nazarius Kirama, Umar Khan Utmanzai

Agreed on: Importance of judiciary engagement in Internet Governance


Judges should embrace AI and other technologies to improve court processes

Explanation: Judge Laltaika advocates for the use of AI and other technologies in the courtroom to enhance judicial processes. He argues that these tools can help judges in various aspects of their work, from scheduling to research.

Evidence: He mentions that in Tanzania, AI is used for scheduling cases, assigning cases to judges, language translation, and legal research.

Major Discussion Point: Importance of Judiciary Engagement in Internet Governance

Agreed with: Umar Khan Utmanzai, Martin Koyabe

Agreed on: Need for updated legal frameworks and training

Differed with: Umar Khan Utmanzai

Differed on: Approach to AI adoption in judiciary


AI systems can “hallucinate” and present inaccurate information

Explanation: Judge Laltaika acknowledges the issue of AI hallucination, where AI systems can generate inaccurate or irrelevant information. He emphasizes the need for judges to verify information generated by AI systems.

Evidence: He gives an example of asking AI about economic development in Tanzania and receiving information about Mozambique instead.

Major Discussion Point: Challenges in Applying Law to Digital Spaces


Nazarius Kirama

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 seconds

Judiciary has been absent from Internet Governance Forum discussions

Explanation: Kirama points out that the judiciary has not been engaged in the Internet Governance Forum since its formation in 2005. He argues that this absence needs to be addressed to ensure proper engagement in debates about internet governance.

Evidence: He mentions that this session is happening for the first time during the lifetime of the Internet Governance Forum.

Major Discussion Point: Importance of Judiciary Engagement in Internet Governance

Agreed with: Eliamani Isaya Laltaika, Umar Khan Utmanzai

Agreed on: Importance of judiciary engagement in Internet Governance


Policies needed to prevent digital exclusion of marginalized groups

Explanation: Kirama emphasizes the need for inclusive digital policies that address the needs of marginalized and disabled communities. He argues that these policies are necessary to prevent digital exclusion.

Major Discussion Point: Inclusivity in Digital Rights


Umar Khan Utmanzai

Speech speed: 166 words per minute
Speech length: 1162 words
Speech time: 419 seconds

Legal frameworks need to be updated to address digital rights

Explanation: Utmanzai argues that existing legal frameworks are inadequate to address the challenges posed by digital technologies. He emphasizes the need for laws that can effectively protect digital rights and handle cyber-related cases.

Evidence: He mentions that in Pakistan, the Prevention of Electronic Crimes Act was only passed in 2016, indicating the recent nature of digital rights legislation.

Major Discussion Point: Importance of Judiciary Engagement in Internet Governance

Agreed with: Eliamani Isaya Laltaika, Nazarius Kirama

Agreed on: Importance of judiciary engagement in Internet Governance


Cross-border nature of internet creates jurisdictional issues

Explanation

Utmanzai highlights the challenges posed by the global nature of the internet in legal proceedings. He points out that judges often struggle with determining jurisdiction in cases involving cross-border cyber activities.


Evidence

He gives an example of a person in Pakistan being harassed by someone outside the country, questioning how a judge would handle such a case.


Major Discussion Point

Challenges in Applying Law to Digital Spaces


Lack of precedent in cyber cases creates difficulties

Explanation

Utmanzai points out that the relative newness of cyber laws means there is a lack of legal precedent for judges to rely on. This absence of established case law makes it challenging for judges to make consistent rulings in cyber-related cases.


Evidence

He mentions that in Pakistan, the Prevention of Electronic Crimes Act was passed in 2016, indicating the recent nature of such laws and the consequent lack of precedents.


Major Discussion Point

Challenges in Applying Law to Digital Spaces


Increase judicial training on technology issues

Explanation

Utmanzai advocates for enhanced training for judges on technology and cyber-related issues. He argues that this is necessary to equip judges with the knowledge needed to handle digital cases effectively.


Evidence

He mentions that in Pakistan, many judges who graduated before 2000 lack knowledge about the internet and AI, highlighting the need for training.


Major Discussion Point

Strategies for Improving Digital Rights Protection


Agreed with

Eliamani Isaya Laltaika


Martin Koyabe


Agreed on

Need for updated legal frameworks and training



Martin Koyabe

Speech speed: 151 words per minute

Speech length: 1416 words

Speech time: 560 seconds

Digital evidence is complex and requires new skills from judges

Explanation

Koyabe highlights the challenges posed by digital evidence in legal proceedings. He argues that the volatile and easily alterable nature of digital evidence requires judges to have specific skills and understanding to handle it properly.


Evidence

He gives an example of a case where critical evidence was lost because police officers unplugged a computer, causing the evidence on the screen to disappear.


Major Discussion Point

Challenges in Applying Law to Digital Spaces


Agreed with

Eliamani Isaya Laltaika


Umar Khan Utmanzai


Agreed on

Need for updated legal frameworks and training


Frameworks should embed human rights protections

Explanation

Koyabe emphasizes the importance of incorporating human rights protections into digital frameworks. He argues that elements like freedom of speech and expression should be embedded within a country’s digital strategy and structures.


Major Discussion Point

Inclusivity in Digital Rights



Rachel Magege

Speech speed: 155 words per minute

Speech length: 1506 words

Speech time: 579 seconds

Gender-based violence can be exacerbated online

Explanation

Magege points out that digital platforms can amplify and exacerbate gender-based violence. She argues that what happens in physical spaces can be mirrored and intensified in online environments.


Major Discussion Point

Inclusivity in Digital Rights


Digital divide affects access to justice

Explanation

Magege highlights the issue of the digital divide and its impact on access to justice. She argues that disparities in technology access and literacy can create barriers to justice in an increasingly digital legal system.


Evidence

She mentions diverse groups including people in rural areas and those with different levels of education who may struggle with access to digital legal services.


Major Discussion Point

Inclusivity in Digital Rights


Show benefits of AI to increase acceptance

Explanation

Magege suggests that demonstrating the positive aspects and benefits of AI can help increase its acceptance in the legal system. She argues that this approach can help bridge the gap between technology and those who fear or resist it.


Evidence

She gives examples of AI being used to detect crop diseases and predict floods, showing its potential benefits beyond the legal system.


Major Discussion Point

Strategies for Improving Digital Rights Protection



AUDIENCE

Speech speed: 136 words per minute

Speech length: 1148 words

Speech time: 505 seconds

Develop comprehensive legal frameworks for digital rights

Explanation

An audience member emphasizes the need for comprehensive legal frameworks to address digital rights. They argue that these frameworks should be aligned with emerging challenges to effectively safeguard digital rights, privacy, and transparency.


Major Discussion Point

Strategies for Improving Digital Rights Protection


Strengthen cross-border judicial collaboration

Explanation

The audience member advocates for enhanced collaboration between judiciaries across different countries. They argue that this is necessary for effectively handling cross-border digital issues.


Major Discussion Point

Strategies for Improving Digital Rights Protection


Agreements

Agreement Points

Importance of judiciary engagement in Internet Governance

speakers

Eliamani Isaya Laltaika


Nazarius Kirama


Umar Khan Utmanzai


arguments

Judges need to understand digital issues to properly adjudicate cases


Judiciary has been absent from Internet Governance Forum discussions


Legal frameworks need to be updated to address digital rights


summary

The speakers agree that the judiciary needs to be more involved in Internet Governance discussions and that judges require a better understanding of digital issues to effectively handle related cases.


Need for updated legal frameworks and training

speakers

Eliamani Isaya Laltaika


Umar Khan Utmanzai


Martin Koyabe


arguments

Judges should embrace AI and other technologies to improve court processes


Increase judicial training on technology issues


Digital evidence is complex and requires new skills from judges


summary

The speakers concur that legal frameworks need to be updated to address digital challenges, and that judges require specialized training to handle technology-related cases effectively.


Similar Viewpoints

Both speakers highlight the challenges posed by digital evidence and AI in legal proceedings, emphasizing the need for judges to have specific skills to handle these technologies effectively.

speakers

Eliamani Isaya Laltaika


Martin Koyabe


arguments

AI systems can “hallucinate” and present inaccurate information


Digital evidence is complex and requires new skills from judges


Both speakers emphasize the importance of addressing the digital divide and ensuring that marginalized groups have access to digital services and justice.

speakers

Nazarius Kirama


Rachel Magege


arguments

Policies needed to prevent digital exclusion of marginalized groups


Digital divide affects access to justice


Unexpected Consensus

Positive aspects of AI in the legal system

speakers

Eliamani Isaya Laltaika


Rachel Magege


arguments

Judges should embrace AI and other technologies to improve court processes


Show benefits of AI to increase acceptance


explanation

Despite concerns about AI hallucination, both speakers unexpectedly advocate for showcasing the benefits of AI in the legal system to increase its acceptance and improve processes.


Overall Assessment

Summary

The main areas of agreement include the need for judiciary engagement in Internet Governance, updating legal frameworks to address digital challenges, providing specialized training for judges, and addressing the digital divide to ensure inclusive access to justice.


Consensus level

There is a high level of consensus among the speakers on the importance of integrating the judiciary into Internet Governance discussions and the need for capacity building. This consensus implies a strong recognition of the challenges posed by digital technologies in the legal realm and a shared commitment to addressing these challenges through education, training, and policy updates.


Differences

Different Viewpoints

Approach to AI adoption in judiciary

speakers

Eliamani Isaya Laltaika


Umar Khan Utmanzai


arguments

Judges should embrace AI and other technologies to improve court processes


Judges struggle with the complexity of technology because they are not well trained in it


summary

Judge Laltaika advocates for embracing AI in judicial processes, while Utmanzai highlights the challenges judges face due to lack of training and understanding of technology.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to integrating technology in the judiciary and the extent of training required for judges.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but have slightly different perspectives on implementation. This suggests a general consensus on the importance of addressing digital rights in the judiciary, which is positive for advancing the topic.


Partial Agreements

All speakers agree on the need for judges to understand digital technologies, but they differ in their approaches. Laltaika emphasizes general understanding, Utmanzai focuses on specific training, and Koyabe highlights the need for skills in handling digital evidence.

speakers

Eliamani Isaya Laltaika


Umar Khan Utmanzai


Martin Koyabe


arguments

Judges need to understand digital issues to properly adjudicate cases


Increase judicial training on technology issues


Digital evidence is complex and requires new skills from judges


Takeaways

Key Takeaways

The judiciary needs to be more engaged in Internet Governance discussions and processes


Judges require training and tools to properly handle digital evidence and cyber cases


Legal frameworks need to be updated to address emerging digital rights issues


Cross-border collaboration is needed to address jurisdictional challenges in cyber cases


AI and other technologies can improve court processes but also present new challenges


Inclusivity and protection of marginalized groups must be considered in digital rights policies


Resolutions and Action Items

Launch of the Judiciary Global School on Internet Governance to train judges on IG issues


Plan to include more judges and legal practitioners in future IGF meetings


Tanzanian judge to request government sponsorship for lawyers to attend next IGF


Unresolved Issues

How to balance judicial independence with the need for technology adoption


Extent to which court processes should be digitized (e.g. online marriages)


How to address AI hallucination and ensure accuracy of AI-generated legal information


Funding and resource allocation for judiciary digitization in developing countries


Suggested Compromises

Gradual introduction of technology in courts, starting with scheduling and case assignment


Use of legal research assistants to help judges navigate new technologies


Framing digitization benefits in terms of efficiency and citizen service to gain political support


Thought Provoking Comments

Within the East African community, and I'm not going to mention countries here, there is, in one country, a high court judge who was summoned by the disciplinary committee because a part of his judgment was allegedly written by ChatGPT.

speaker

Judge Eliamani Isaya Laltaika


reason

This comment highlights the real-world challenges and controversies surrounding the use of AI in judicial processes, illustrating the tension between technological advancement and traditional legal practices.


impact

It sparked a deeper discussion about the ethical implications and practical challenges of integrating AI into judicial systems, leading to considerations of guidelines and regulations for AI use in courts.


In Tanzania, we got out of this because we started with a strategic plan. We sat down and made a five-year strategic plan. We identified what we need. We handed it over to the executive. The executive said, okay, now we know your needs.

speaker

Judge Eliamani Isaya Laltaika


reason

This comment provides a concrete example of how to effectively implement technological changes in the judiciary, emphasizing the importance of strategic planning and collaboration with the executive branch.


impact

It shifted the conversation towards practical solutions and strategies for digital transformation in the judiciary, encouraging other participants to consider similar approaches in their own contexts.


So for example, when you look at the case of Tanzania, when they were developing their cyber security strategy, they were very keen that they must have those fundamental tools and instruments embedded within their strategy.

speaker

Martin Koyabe


reason

This comment emphasizes the importance of integrating cybersecurity considerations into national strategies, highlighting a proactive approach to addressing digital challenges.


impact

It broadened the discussion to include the importance of comprehensive national digital strategies, encouraging participants to consider how legal frameworks, cybersecurity, and digital rights can be integrated at a policy level.


In Pakistan, a judge cannot even sit in public; he does not meet people in public. And those who are judges in the high court are judges who graduated before 2000, in 1998, 1999, and 2000. So they do not even know about the internet; they do not know about AI.

speaker

Umar Khan Utmanzai


reason

This comment provides a stark contrast to the more technologically advanced judicial systems discussed earlier, highlighting the significant disparities in digital literacy and access among judges in different countries.


impact

It brought attention to the global inequalities in judicial digital literacy, prompting a discussion on the need for targeted training and capacity building for judges in less technologically advanced jurisdictions.


Overall Assessment

These key comments shaped the discussion by highlighting the complex challenges of integrating technology into judicial systems across different contexts. They moved the conversation from theoretical discussions about digital rights to practical considerations of implementation, training, and policy development. The comments also underscored the global disparities in judicial digital literacy and access to technology, emphasizing the need for tailored approaches in different countries. Overall, these insights deepened the conversation, making it more nuanced and action-oriented, while also broadening its scope to consider diverse global perspectives on judicial engagement with digital technologies.


Follow-up Questions

How can frameworks be developed to guide the use of AI in courtrooms?

speaker

Eliamani Isaya Laltaika


explanation

There is a lack of guidelines or regulations for AI use in courtrooms, leading to inconsistent approaches across jurisdictions.


What measures can be implemented to check the accuracy of AI-generated content in legal briefs?

speaker

Audience member


explanation

There is concern about lawyers using AI tools like ChatGPT for their briefs without verifying the accuracy of the generated content.


How can the executive branch be encouraged to financially support the modernization and capacity building of the judiciary?

speaker

Audience member (Doron from UNEGOV)


explanation

Many judiciaries lack the financial independence to fully benefit from digitalization efforts.


What are the appropriate limits for digitalization in the judiciary, particularly for sensitive legal processes?

speaker

Audience member (Doron from UNEGOV)


explanation

There is a need to determine which judicial processes can be safely digitalized and which require in-person interactions.


How can legal education be updated to include more technology and cyber law components?

speaker

Umar Khan Utmanzai


explanation

Many law graduates lack formal education in cyber law and technology, which is increasingly important in legal practice.


What strategies can be employed to enhance cross-border judicial collaboration on digital rights issues?

speaker

Audience member (Hassar Taibi)


explanation

There is a need for better coordination of judicial decisions across countries to effectively handle cross-border digital issues.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #283 Breaking the Internet Monopoly through Interoperability

Session at a Glance

Summary

This discussion focused on the monopolistic practices of big tech companies and the concept of interoperability as a potential solution. The speaker, Batool Almarzouq, began by highlighting the immense wealth and power of major tech corporations like Apple, Microsoft, and Facebook. She explained how these companies maintain their dominance through strategic acquisitions, leveraging legal power, and creating barriers to competition.

Almarzouq emphasized the importance of interoperability in fostering innovation and breaking monopolies. She provided historical examples, such as IBM’s decline in the PC market and Apple’s challenge to Microsoft’s office suite dominance, to illustrate how interoperability can disrupt monopolistic control. The speaker outlined three types of interoperability: cooperative, indifferent, and adversarial, with a focus on the challenges of implementing adversarial interoperability.

The discussion touched on the implications for the Global South, where countries face the dual burden of adopting tech-friendly laws favoring US corporations and using technology ill-suited to local conditions. Almarzouq stressed the need for legal changes to support interoperability and protect users’ rights to modify and control their technology.

In the group discussions that followed, participants explored various aspects of interoperability. One group focused on the implementation of the EU’s Digital Markets Act and its potential impact on messaging services. Another group discussed the need for regulatory approaches to address data privacy violations and the importance of transparency from big tech companies. The discussions highlighted the complex challenges in implementing interoperability and the need for localized approaches to tech regulation.

Keypoints

Major discussion points:

– The monopolistic power of big tech companies and how they maintain dominance

– The importance of interoperability in technology and how it can break monopolies

– Implications for the Global South in terms of technology access and adaptation

– Potential solutions like new laws and rebuilding internet infrastructure

Overall purpose:

The goal was to examine how big tech companies have gained and maintained monopoly power, discuss the importance of interoperability, and explore potential solutions to create a more open and equitable technological landscape, especially for the Global South.

Tone:

The tone was primarily informative and analytical, with the speaker presenting research and concepts in an academic style. There was an underlying tone of concern about the current state of tech monopolies and their global impact. The tone shifted to be more collaborative and reflective during the group discussion portion at the end.

Speakers

– Batool Almarzouq: Computational biologist, open science advocate, working at the Alan Turing Institute

– Ghaidaa Alshanqiti: Online facilitator for the session, Civil Society, Western European and Others Group (WEOG)

– Ian Brown: Discussed interoperability in the European Union

– Panelist: Unnamed panelist who summarized discussion from a group including colleagues from Russia and Kenya

Additional speakers:

– Anne Steele: Mentioned as one of the organizers of the session

– Abdul Latif Talwan: Mentioned as one of the organizers of the session

– Cory Doctorow: Author of the book “The Internet Con”, referenced in the discussion

– Jason Hickel: Anthropologist quoted in the discussion

Full session report

Expanded Summary of Discussion on Tech Monopolies and Interoperability

Introduction:

This discussion focused on the monopolistic practices of big tech companies and the concept of interoperability as a potential solution. The primary speaker, Batool Almarzouq, a computational biologist and open science advocate from the Alan Turing Institute, presented research and concepts in an academic style. The event was organized by Batool Almarzouq, Anne Steele, Abdul Latif Talwan, and Ghaidaa Alshanqiti.

1. The Rise and Dominance of Big Tech Companies:

Batool Almarzouq began by highlighting the immense wealth and power of major tech corporations. She noted that, according to Oxfam, the top 1% owns 43% of all global financial assets, while two corporations control 40% of the global seed market. Apple’s market value, for instance, exceeds the GDP of many countries.

Almarzouq outlined several ways these companies maintain their dominance:

a) Strategic acquisitions: Rather than competing through innovation, big tech companies often buy out potential competitors.

b) Leveraging legal power: They use various legal strategies to maintain market dominance, including the DMCA Section 1201 and trademark laws.

c) Creating barriers to competition: This includes tactics like non-compete clauses.

d) Network effects and switching costs: These make it difficult for users to switch platforms.

She also pointed out that antitrust enforcement changed significantly in the 1970s and 1980s, allowing these monopolies to form and grow.

2. The Importance of Interoperability:

Almarzouq emphasised the role of interoperability in fostering innovation and breaking monopolies. She provided historical examples:

a) IBM’s decline in the PC market: This occurred when IBM PC clones entered the market, increasing interoperability.

b) Apple’s challenge to Microsoft’s office suite dominance: This was made possible through Apple’s iWork suite and interoperability.

The speaker outlined three types of interoperability:

a) Cooperative: Where companies willingly work together (e.g., email protocols; a minimal sketch follows this list).

b) Indifferent: Where one company doesn’t actively help others.

c) Adversarial: Compatible alternatives built against the original company’s wishes, which was the focus of the discussion.
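
To make the cooperative case concrete, here is a minimal sketch of sending an email in Python. Nothing in it is specific to this session: the server address and credentials are placeholders, and the point is simply that delivery works across vendors because every provider implements the same open standards (SMTP for transport, a shared message format), not because any one company permits it.

```python
import smtplib
from email.message import EmailMessage

# The sender and recipient can be on completely different providers;
# the shared open standard, not the vendor, defines the interface.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.net"
msg["Subject"] = "Cooperative interoperability"
msg.set_content("Any standards-compliant provider can deliver this message.")

# smtp.example.com and the credentials below are placeholders.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                  # upgrade to an encrypted connection
    server.login("alice@example.com", "app-password")  # placeholder credentials
    server.send_message(msg)
```

This is exactly the property the monopoly platforms discussed above lack: there is no equivalent standard call for sending a message to a Facebook user from a non-Facebook service.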

3. Legal and Business Tactics Preventing Interoperability:

Almarzouq explained that big tech companies now use various legal and technical barriers to prevent interoperability, especially adversarial interoperability. This has made it increasingly difficult for new competitors to enter the market or for users to switch between platforms.

4. Implications for the Global South:

The discussion briefly touched on the significant challenges faced by countries in the Global South. Almarzouq highlighted a “double burden” these countries face:

a) They are forced to adopt tech-friendly laws that primarily benefit US corporations.

b) They must use technology ill-suited to local conditions, as it’s often designed for different contexts.

This situation has led to local developers creating better-adapted versions of apps, highlighting the need for technology that considers local needs and contexts.

5. Potential Solutions and Regulatory Approaches:

Several potential solutions were discussed:

a) New laws supporting interoperability: The EU’s Digital Markets Act was cited as an example, mandating interoperability for messaging apps and potentially extending to social networking services.

b) Rebuilding internet infrastructure: This was suggested as a long-term solution to create a more open internet.

c) Collective advocacy: Almarzouq emphasised the need for users to collectively push for better interoperability.

Ian Brown discussed interoperability in the European Union, noting the high resource requirements for implementing interoperability, especially for smaller organisations.
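
For readers who want a feel for what building on an open protocol involves: one concrete option raised later in the session is Matrix, the open source protocol that one organisation has been using to work toward WhatsApp interoperability under the DMA. The sketch below is a hypothetical illustration using the community matrix-nio Python client; the library choice, homeserver URL, account, and room ID are all assumptions for illustration, not tools or services named by the speakers.

```python
# pip install matrix-nio
import asyncio
from nio import AsyncClient

async def main() -> None:
    # Homeserver, user ID, password, and room ID are invented placeholders.
    client = AsyncClient("https://matrix.example.org", "@alice:matrix.example.org")
    await client.login("placeholder-password")

    # Because the protocol is open, any third-party client or bridge that
    # speaks Matrix can send and receive this event, regardless of vendor.
    await client.room_send(
        room_id="!demo:matrix.example.org",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Hello from an interoperable client"},
    )
    await client.close()

asyncio.run(main())
```

The resource problem noted above sits one level up from code like this: speaking the open protocol is relatively easy, while building and operating a production bridge into a proprietary service is what requires sustained funding.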

6. Group Discussions and Additional Perspectives:

Following the main presentation, participants broke into groups to explore various aspects of interoperability. They were asked to discuss challenges, opportunities, and potential solutions related to interoperability. Key points from these discussions included:

a) Implementation challenges of the EU’s Digital Markets Act, particularly for messaging services.

b) The need for regulatory approaches to address data privacy violations by big tech companies.

c) The importance of transparency from big tech companies regarding their protocols and algorithms.

d) The need for localisation of terms of service for different markets and cultures.

e) Challenges faced by organizations like ICRC in using platforms like Teams while maintaining interoperability.

A panelist summarising discussions from a group including colleagues from Russia and Kenya emphasised data privacy violations and financial statement violations by big tech companies.

Jason Hickel’s quote about whether our economic relationships are based on extraction, exploitation, or care was highlighted as a key insight from the discussions.

Conclusion:

The discussion highlighted the complex challenges in implementing interoperability and the need for localised approaches to tech regulation. It emphasised the importance of collective action, regulatory intervention, and innovative solutions to address the dominance of big tech companies and create a more open, equitable technological landscape, especially for the Global South.

The organizers plan to produce a report about the session and share it with participants, encouraging further exploration of the topics discussed.

Session Transcript

Batool Almarzouq: I apologize for the delay, and thank you so much, really, for joining at this very early hour. For those of you in the audience, thank you so much for coming all the way to Saudi, and for people joining online, thank you for making the time to join. I want to acknowledge that this session is organized not just by me, though you can only see me here in the room, but by four people: the wonderful Anne Steele, Abdul Latif Talwan, and also Ghaidaa, who is actually our online facilitator. This is not going to be a panel. It's not similar to the other sessions I've been to in the conference so far. So the plan: I'll be speaking for 25 minutes at the very beginning to give some sort of introduction and to speak more about the topic, then we're going to move into table discussions for about ten minutes. People who are in the room, I'm going to gather you into groups, and I've got notebooks and pens; people who are online, we're going to have a Google Doc that you can use to share your reflections on some questions that I'll be presenting. Again, this is the last day of the conference, so we want everyone to connect, to make a friend, to network with the people next to you. And the last thing I want to say is that this talk is a bit heavy with legal concepts that I'm going to try to simplify. But also, I want to make sure the sound is good. OK, hopefully the sound now is way better. So as I said, the talk is going to be a bit heavy with legal concepts, but I'm going to try to make it very clear. I'm not a lawyer, and none of us are lawyers, so any information or data shared during the talk is cited. The slides are also going to be available online today, so please go back to this information and all of these citations. I'm a computational biologist by training, and I'm an open science advocate. I'm Saudi, but I'm based in the UK, and I'm currently working at the Alan Turing Institute, which is the UK national institute for data science and AI. So I come to this topic from an academic point of view. What I'll be trying to do is encourage us to reflect on a question about how we encourage growth, innovation, and tech without harm, and what decolonization actually looks like for the global south. And my intention, really, is not to focus on the legal or technical concepts, but more on the implications and what it means for us. So the first question: why bother, actually, with the internet monopoly? Why are we speaking about this in the first place? We are all aware of the disparity between the global south and the global north when it comes to resources, wealth, domination, extraction. To get more of a sense of what monopoly looks like, let's start with this stat. According to Oxfam, the top 1% actually owns 43% of all the global financial assets, while two corporations control 40% of the global seed market. This is how serious it is. Looking at other stats within tech itself: again, all the figures that I'm going to be presenting about big tech were calculated in 2021 and are cited here. Some might have changed since, but I don't expect these changes to be huge or very far from these figures. So let's start with Apple. Apple's net worth actually exceeds the GDP of many countries, like Italy, Brazil, Canada, Russia.
In fact, there are only seven countries in the world with a higher GDP than Apple's $2.2 trillion, meaning that this tech giant is richer than much of the world's population or nations. If Apple were actually a country, it might take the place of the eighth richest country in the world. That's how serious it is. Let's look at another example, Microsoft. Microsoft is no different. It has a $1.8 trillion market cap, the third largest company on our list, on par with the GDP of Canada and richer than many developed economies like Australia, Spain, and Indonesia. Actually, if you take Apple and Microsoft and combine them together, their value is even higher than the GDP of Central Europe put together. Again, Amazon is without doubt also one of the big tech giants worldwide and saw very strong growth in sales during COVID-19; its market capitalization of $1.6 trillion makes that company richer than 90% of the countries in the globe, as you can also see in this map. A last example, Facebook. It's not in the top 10 with the highest value, but it still has a market cap of $763 billion, and it has been this very huge global player on the world stage. If Facebook were a country, it would be wealthier than Switzerland, Sweden, and the UAE combined. That's how huge it is in terms of GDP. So the reason why I'm showing these sorts of stats: in the past, there was a dynamic and chaotic wave of startups. They rose to the peak, becoming household names within just a few years, and then they disappeared just as they reached the peak. And that was a very normal cycle. However, this cycle was interrupted when the new giants did not just get big, they stayed big. And then they got bigger, and bigger, and bigger. So that's the interruption that is happening, and that's what we're trying to understand. So probably the question is, how did this big tech become big, and how did they stay big without going through the same cycle? I think this is a very tricky question. I'm sure some of you in the audience have different answers; there is no very clear answer. So I'm going to be mainly focusing on and using some analysis presented by Cory Doctorow in his latest book, The Internet Con. But again, this is something that is up for debate. I just want to highlight a few things about these big tech companies. Again, when I speak about them, this is not to criticize anyone who is affiliated with these companies; it is more about trying to reflect on a lot of these practices and how we can understand their implications for our society. So one of the things that big tech is really doing: the big five tech companies made hundreds and hundreds of acquisitions. We don't have an exact number; some say it's over 600 from 2010 to 2019. These were not tiny purchases, each was worth over $1 million, and that shows how aggressively they were buying up other companies. When they buy these companies, the founders of the startups have to sign an agreement. We call this agreement a non-compete clause, and it prevents them from creating competing products. So effectively, they try to stop these startups, or these innovators, from starting a new rival company.
And that kept talent and ideas locked within the big tech companies. Many of these acquired companies were startups, like under five years old, and that suggests that big tech was buying potential competitors before they could grow. That pattern of eliminating competition early, before it becomes serious, is one of the factors that actually contributed to these big techs staying big. For example, we all remember Facebook's acquisitions of Instagram and WhatsApp, starting back in 2012. Instagram was becoming super popular, especially because they had figured out how to make their social media work great on phones, something that Facebook was struggling with a bit. So rather than trying to fix that problem, Facebook actually bought Instagram for $1 billion. At that time, people thought that was crazy money for an app that had no income, but Facebook knew what they were doing. Then in 2014, they saw WhatsApp becoming huge in messaging, so they went even bigger, spending, I believe, $20 billion to buy WhatsApp. Facebook's own internal documentation showed that they were worried about people actually leaving their platform, and they knew it was very hard for a user to switch because all of their friends, their network, were already on Facebook. Fast forward to today, and this strategy has worked incredibly well for Facebook, now Meta; we call Facebook Meta. They control the social media of about 4 billion people worldwide, and that makes Meta basically an official ruler of half of the world's digital social life. So what I'm trying to say is that instead of winning by making better products, they win by buying up anyone who might compete with them. And because regulators, I mean antitrust enforcement, which I'm going to speak about in a bit, did not stop these purchases, Facebook has been able to keep doing this, leaving users with fewer choices and less innovation in social media. The other factor that actually contributed to big tech staying big, or becoming giant in the first place, is the death of antitrust enforcement. I'm going to be focusing on some of this in the American economy. So back in 1890, America created strong laws, which we call the Sherman Act, to prevent any company from getting too powerful. And the thinking was pretty straightforward: just like you don't want one person having too much political power, you also don't want one company controlling too much of the market. So this is something that, within the American legal system, they have thought about from the very beginning, since the 1890s. But everything changed in the 1970s, when a guy named Robert Bork came along with a new idea. He basically said monopolies were fine as long as they did not raise prices for consumers. This was a huge change from the original law, which was meant to stop any company from getting too big. The idea caught fire at the University of Chicago back then, where economists created these fancy models to prove that monopolies were actually good for everyone. And these models were so complicated that only their creators could understand them, which was perfect for big companies, who would hire these same economists as consultants to justify their mergers. Things really took off in the 1980s, when President Reagan and the US government basically stopped enforcing these antitrust laws. And that thinking spread globally through leaders like Margaret Thatcher in the UK. And the result is what we see today: we see monopolies everywhere, from airlines to eyeglasses to candy companies. This is why Facebook could actually just buy Instagram and WhatsApp instead of competing with them. For example, it took 69 years of constant effort to break up the old telephone monopoly, AT&T, which people in the United States are familiar with. But today, similar monopolies face practically no threat of being broken up.
And that thinking spread globally through leaders like Margaret Thatcher in the UK. And the results, what we see today, we see monopolies everywhere, from airline, from eyeglasses to candy companies. So this is why Facebook could actually just buy Instagram and WhatsApp instead of competing with them. For example, it took 69 years for constant effort to break up the old telephone monopoly, AT&T, people who are in the United States familiar with it. But today, similar monopolies face practically no threat of being broken up. even have to prove they’re not monopolies anymore. They just hire economists to argue why their monopolies is actually good for everyone. This change in how we handle monopoly has completely transformed the American economy. So instead of fighting big companies getting bigger, we now celebrate it. And that’s the big reason why tech company have been able to grow so, so incredibly powerful. The last thing that helped big tech is how they leverage also legal power. So think of big tech companies as a players who not just know how tech game, but also know the legal game. They have different legal protection with enough money for lawyers, they can batch up any weakest spot by combining different laws in very, very clever way. One of the favorite laws is something called the MCA. Especially section 1201, which is a pretty, pretty intense. So if you, that section tell you if you break any digital lock, even for perfectly legal reason, you face five years in prison and 500,000 fine. So company can stop these digital lock, they can actually add these digital lock in their product and then call out anyone to try to modify or work with them. They basically make it a crime to mess with their business model. So take Apple, for example, they bought this tiny Apple logo and iPhone parts inside of the iPhone itself. Then they claim it’s a trademark violation if you use or you change some of these parts for repair. So between this and this, they control who gets to repair their devices and how. They also use complicated contract, those terms of service that nobody reads. Plus agreement that’s still. employees from working for competitors or talking about the company secret. So it’s very, very powerful when they combine all these legal tools in a way that stops any sort of competitors. So these companies are not just building teams for engineers anymore, they’re building also armies of lawyers, and if anyone tries to compete with them, they don’t need just to make their product better. They have to also understand the legal system until the other party or side actually try to give up. And this is not just in America, these are trade agreements, they spread and they run worldwide. I’m going to speak about their implication in the global south in particular. So some might feel this is very, very normal, this is how competition should be, but my takeaway is that technology has changed very, very dramatically from something anyone could think or improve into something that is controlled by a giant company. So these days having great, great technical skills is not enough, you need also a deep, deep bucket of legal battle if you want to innovate in tech. So this takes me in how tech used to grow in the past. By the ability of computer system or tech or software to exchange and make use of information. And we call that interoperable system. 
So actually, as Cory Doctorow put it, the technological world that we inhabit today was profoundly shaped by the ability of newcomers to hack interoperable add-ons and features into the technology that came before them. And actually, this is how all tech used to develop and work. This is how Microsoft got big; this is how all of these big giants actually got big in the first place. So let's take two examples of what I mean by interoperable, one around hardware and another around software. Let's speak first about breaking a hardware monopoly, with IBM. People who are older than me remember IBM in the 1970s and 80s. IBM was the king of computing; they were so powerful that people called them Big Blue. But IBM had a problem back then: they had been fighting an antitrust case with the Department of Justice for around 12 years. During that battle, IBM had to produce every memo and attend countless meetings with lawyers who were paranoid about antitrust violations, and this scrutiny made IBM extremely cautious about anything that might look monopolistic. So to avoid further antitrust scrutiny, IBM made two key decisions. One thing they did was use commodity parts, common parts that any manufacturer could buy on the open market. And they also decided not to make their own PC operating system; instead, they got it from Microsoft. Just to give some context, Microsoft back then was not this big tech giant; it was a really young company. So IBM voluntarily embraced interoperable components and an interoperable operating system. And then something very interesting happened. A small company called Phoenix figured out how to reverse engineer the one part that IBM still kept unique and manufactured themselves, the ROM chip that made the computer work. And that meant other companies could now make complete IBM-compatible computers. So companies like Dell and Gateway started making these PC clones that could run all the same programs as an IBM computer, but cost less money and had better features. And because of this interoperability, IBM's monopoly crumbled, and they eventually got out of the PC business entirely. The interesting thing is that this helped break IBM's monopoly even more effectively than the antitrust case itself, which they had been fighting for 12 years. So that's an example of how you break a big giant with an interoperable system. Another example that probably many people here are very familiar with is the Apple versus Microsoft battle over the office suite. In the early 2000s, Microsoft had built this new tech monopoly, controlling 95% of desktop computing through Windows. Actually, the first computer that I had back then ran Microsoft and had this Office suite that I used for everything. And Microsoft used this power to dominate office software with Microsoft Office, which created a big problem for Apple, because Microsoft Office did not work on Macs, or it worked extremely badly, making it hard for Mac users to share files with Windows users. Let's say I had made a presentation on Windows; if I gave it to my colleague who was using a Mac back then, they could not open it or make it work. Most people used Windows back then, which meant buying a Mac was very risky: you might not be able to work with your colleagues' files
that they created in Microsoft Office, like Word documents. So Apple found a very clever way. Instead of begging Microsoft to make better Mac software, they worked to reverse engineer the Microsoft file formats. They created something called iWork, their own office suite, which I believe many of you are familiar with, which could read and write Microsoft Office files. And suddenly, Mac users could actually freely exchange files with Windows users. Even better, Windows users could switch to Mac without losing access to their old files. This helped save Apple and eventually forced Microsoft to make their file formats a public standard that anyone could use. So what I would like to highlight from this story is how interoperability can break these monopolies in two ways: letting your competitors make compatible hardware, like the IBM PC clones, or compatible software, like Apple's iWork. In both cases, the monopoly company could not maintain control once users had a real choice that wouldn't cut them off from their files or programs. This is why tech giants today fight so hard to prevent interoperability: they've learned from history that one of the biggest threats to their domination is their hardware or software becoming interoperable. So there are three distinct types of interoperability. As I mentioned, there's cooperative interoperability, where companies willingly work together and follow agreed-on standards, like email protocols; for example, Gmail can send to Apple Mail because they share a standard based on mutual agreement and cooperation. There is indifferent interoperability, where one company doesn't actively help others but doesn't stop them either. But the one that I would like to focus more on is adversarial interoperability: creating compatible alternatives against the original company's wishes. This often involves reverse engineering, which is very challenging, because the original company actively tries to prevent it, and that's the example I gave with Apple's iWork and Microsoft Office. So the new challenges we find when we try to do this adversarial interoperability are the same ones I mentioned previously: big companies use strategies to stay at the very top, legal tools and business tricks. As I mentioned, DMCA Section 1201 represents one of the most powerful tools against interoperable systems. A company can add a simple digital lock to their product and then use the DMCA to legally attack anyone who tries to modify or interoperate with it. They also use trademark, and there are business tricks that keep users stuck within their service. As I mentioned, the simplest approach is buying any company that might become a competitor. And the clever part is how they combine all these strategies, the legal, business, and technical tactics. So it's not enough for a competitor to get past one barrier; they have to break down multiple layers of protection. Even if someone is very smart and figures out how to technically connect with a big tech service and reverse engineer everything, they face a lawsuit, and there are other layers that they have to overcome. So to break both types of barriers, we really need new laws that make it legal to connect with and modify existing services, and we need rules that make it easier for users to leave services without losing everything they've actually built over there.
So going back to the network effect we see in social media: big tech gets big through what we call the network effect. You join Facebook because that's where everyone shares photos and updates, then your friends join because you are there, and their friends join because they are there, and the cycle continues. That's what we call the network effect, and it's how tech companies grow massive user bases very quickly. But getting big is not enough; they need to keep people from leaving. This is where switching costs come in. Imagine trying to leave Facebook. You would lose all of your photos, your messages, your event invitations, your business contacts, and even your connection to friends and family who only use Facebook. The cost of switching to another service is so high that people stay, even if they are not happy with the service itself, because if you move, you really lose all of your data, all of your photos, friends, network. So it's like being locked in. And here's where it gets very interesting. Technically speaking, there is no reason why you couldn't have a different social media service that could still talk to Facebook. All computers can run any program; that's just how computers are supposed to work, and do work in practice, like how you can use Gmail to email someone using Yahoo mail. You could theoretically have a different service that lets you keep in touch with your Facebook friends without actually using Facebook. But big companies don't want that to happen, and since they can't make it technically impossible, because of how computers work, they use legal barriers instead: laws, contracts, and digital locks that make it very complicated. And this is why fixing the problem requires changing laws, not just building better technology. The technology to break free from big tech control already exists, but the legal barriers are what keep us locked in. Again, how do we support interoperability? Think of today's internet like a bunch of separate castles, Facebook, Google, Apple, et cetera, keeping everyone trapped inside, and the immediate solution is about creating doors in those walls. One of the very encouraging things that has been happening is that the European Union has already been doing this with the Digital Markets Act, which basically tells big companies: you must let other companies connect to your service. It's like forcing Facebook to let you talk to your Facebook friends even if you don't have a Facebook account, or even if you use a very different social network. Another quick fix is allowing what we call competitive compatibility. That means letting engineers figure out how a big tech service works and create compatible alternatives, like how Apple reverse engineered Microsoft Office files to make their iWork software read them. Right now, this kind of reverse engineering is often illegal, and making it legal would immediately give users more choices. The longer-term solution we can speak about is actually rebuilding the internet to be more open from the ground up. This starts with fixing our antitrust laws, which currently let very big companies buy up all their competitors, like Facebook buying Instagram and WhatsApp. We also need technical standards that make different services work together by design.
And most importantly, I would say we need to protect people's right to modify and control their own technology. Right now, if you buy an iPhone, Apple will control what apps you get to install, through the App Store, and the long-term fix is giving this control back to the users. So we need both kinds of solutions: quick fixes to give people more choices right now, and long-term changes to make sure that big tech cannot lock everything down again in the future. And this really matters, because if we can't fix big tech's control over our digital lives, we're going to struggle to fix many other problems in our modern world. We need open, fair technology to organize, communicate, and work together on all of our challenges. I want to touch very lightly on the implications for the global south, which are extremely huge, and that's what is actually of interest to us. The global south faces a double burden. It's forced into passing tech laws friendly only to US corporations, and on top of that, it has to use technology designed elsewhere that often does not work well in local conditions. The technology itself often does not work well in these countries. Imagine trying to download a huge software update when your internet is very slow or expensive; the apps assume you always have power and internet, when in practice you don't. Again, people in these countries find very clever ways around this problem. For example, in Syria during the civil war, someone created a version of WhatsApp that worked better for local needs. And in Indonesia, drivers modified ride-sharing apps to make them work better in their cities, in both cases by reverse engineering. These solutions work because local developers understand local needs better than Silicon Valley companies. So making technology more interoperable, meaning anyone can modify it to work better for their needs, would let countries in the global south adapt technology to their local conditions instead of being forced to use solutions designed in Europe or the US. And this matters because technology today is actually like electricity: you need it to participate in the modern world. But right now, the global south is forced to use technology that is not built for it, while being legally prevented from adapting or reverse engineering this technology to its own needs. So these days, I find myself reflecting a lot on many of the problems we have in our technology and our politics, which always takes me back to the economy, to growth. So I want to leave, this is the last slide, the slide does not change for some reason. It's interesting. It's not changing. OK, it's fine. With this last slide, I want to leave you with this quote from Jason Hickel, who is an amazing anthropologist and has written a lot about degrowth economics, about climate change, about technology. He said that the economy is really this material relationship with what surrounds us, and we have to decide whether we want that relationship to be based on extraction and exploitation, or on care, which is something we leave out when we think about the long-term impact of what we do currently. Probably a good reminder for ourselves that everything we do comes back in a circle to the same people who started it. So I will move to questions and reflection, which we will do as a group. I'm having a problem with changing the slide completely.
Would you be able to change it from your side? Or is that the last one that you have? Can you change it as well? Okay, thank you. So we have, I believe, around 15 minutes for questions and reflection. What I want to try to do is something very new: I want to do some sort of group work, because I also want to collect this feedback and reflection into some sort of report about the session itself. We're going to do the same with the people online. So I'm going to gather people into different groups for around seven to ten minutes. Then each group is going to introduce themselves and get to know each other; this is the day before the last day of the conference, so make friends, get to know the people around you. And then I do have a few questions that we're going to reflect on. I'm also not very fond of these headphones, actually. I don't like speaking to people in headphones when they're right in front of you; it's a different vibe interacting with and hearing people directly. So I'm going to give you some notebooks as well that we have in the back. And for people who are online, we have a Google Doc provided, and we're probably going to do a breakout room to collect all of their thoughts. And for this discussion, we're going to report back all of these thoughts and reflections on the mic, so we can have them in the transcript itself. Would you be able to change the slide once more? Okay, so that's the last slide. So, to start: again, each person is going to introduce themselves to the group, and then after the introduction, choose one of these questions that resonates most with you, or one that you have thoughts around. You don't have to answer all of these, just one of them. The first question: have you ever felt locked into a technology service despite wanting to leave, and what kept you there? The second question: if we make interoperability mandatory, how is that going to change the balance of power between users and tech companies, and how might that affect innovation in technology? The third question: what challenges do you foresee in implementing interoperability as compulsory? The last question: how could users in your country, because we're coming from different places, collectively advocate for better interoperability? We're going to do that in the last five minutes or so. Would that be OK? Let me just try and take this away. I'm going to check with the online people if they are OK. Yeah, right now, would you be able to open the breakout room and add people there, and then give them the Google Doc, please? I'm going to add the Google Doc again, just in case, for people who did not see it from the very beginning.

Ghaidaa Alshanqiti: Thank you, everyone. I see the breakout room online has also finished. So what we’re going to do now is have someone report back from each group. I’m very conscious of the time; we have another session immediately after us. So I’m going to hand it to group one and then group two, and I’ll see if we can do the online group. If we can’t, we’re going to catch up through the Google Doc, and I’ll send it to everyone later on.

Ian Brown: Hello everyone, I’m Ian Brown. We were talking about interoperability in the European Union. First of all, the Digital Markets Act has been in force for over a year, and therefore messaging services, particularly WhatsApp, have to enable interoperability with other messaging services. What we’ve seen so far is that it’s taking some time for other organisations to develop the technology. Multiple companies are working on it, although we don’t know who they all are yet because they haven’t said so publicly. One challenge: an open source organisation has been trying to build WhatsApp interoperability into their service based on Matrix, an open source protocol, but they found that the resource requirements are currently too high for them without a specific customer needing it and being willing to finance it. So that hasn’t happened yet. The EU is also considering extending this legal requirement to social networking services like Facebook. That was debated when the law was passed; the European Parliament wanted it, but the EU governments would not allow it. After a three-year review, that may happen next time. The other point we discussed came from our colleague from the ICRC. They are using Teams globally for meetings, with 20,000 people, you said, a lot of people, but they also talk to organisations like the UN via Zoom sometimes, to the European Commission, and occasionally over WhatsApp. So interoperability could be a useful tool here, letting different organisations using different tools bridge between them without having to install all the different software in all the different places. The one key point, of course, is that the security policy of the ICRC would also have to be met by these other organisations. Thank you.

Batool Almarzouq: Thank you so much for that. Thank you.

Panelist: Thank you very much; I’ll give an overview. Our group had three colleagues, one from Russia and two of us from Kenya, and we looked at several issues with respect to interoperability. We see that big tech companies are engaged in data privacy violations and financial statement violations, and we were of the opinion that we need regulatory approaches to intervene in such violations. There is a need for transparency from the big tech companies in terms of explaining more about the protocols they are using and issues of compatibility; there is a need for transparency in how the technology is used. We discussed and agreed that much more neutrality is required to ensure that these platforms are interoperable. We also see issues with big data analytics and the use of algorithms to mine data from users to influence our purchasing choices. And there is a need for big tech companies to localize their laws, or their terms of use, for the different markets in which they operate. Many times you find that they apply their own local laws in different jurisdictions, and it causes conflict, because different parts of the world have different cultures, and content might be offensive in one part of the world and not in another. Thank you.

Ghaidaa Alshanqiti: Thank you so much for sharing such interesting reflections and thoughts. I also wanted to share the online group’s reflections, but I’m very conscious of the time. So anyone who wrote notes, please hand them to me, or leave your contact details, and we will send all of these reflections to everyone who joined the discussion, online or in person. We’re going to produce a report about the session and share it with everyone, including everyone’s names and contacts if they are comfortable sharing them. I’m really thankful to everyone who joined, especially those online, and to those who facilitated the online discussion. With that, we’re going to close the session so the next session can prepare. Thank you.


Batool Almarzouq

Speech speed

148 words per minute

Speech length

5372 words

Speech time

2166 seconds

Market capitalization of big tech exceeds GDP of many countries

Explanation

Big tech companies like Apple, Microsoft, and Amazon have market capitalizations that exceed the GDP of many countries. This demonstrates the immense economic power these companies wield on a global scale.

Evidence

Apple’s market cap exceeds the GDP of countries like Italy, Brazil, Canada, and Russia. If Apple were a country, it would be the eighth richest in the world (see the sketch after this entry).

Major Discussion Point

The rise and dominance of big tech companies

Agreed with

Panelist

Agreed on

Big tech companies have excessive market power
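
To make the scale of this comparison concrete, here is a minimal Python sketch comparing market capitalizations against national GDPs. The figures are rough, illustrative 2024 values in trillions of USD supplied for the example; they are assumptions, not data taken from the session.

```python
# Illustrative comparison of big-tech market caps with national GDPs.
# All figures are rough 2024 values in trillions of USD, supplied for
# this example only -- assumptions, not session data.
market_caps = {"Apple": 3.4, "Microsoft": 3.2, "Amazon": 1.9}
gdps = {"Italy": 2.2, "Brazil": 2.2, "Canada": 2.1, "Russia": 2.0}

for company, cap in market_caps.items():
    larger_than = [country for country, gdp in gdps.items() if cap > gdp]
    print(f"{company} (${cap}T) exceeds the GDP of: "
          f"{', '.join(larger_than) or 'none of the listed countries'}")
```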

Acquisitions and non-compete clauses eliminate competition

Explanation

Big tech companies aggressively acquire potential competitors and use non-compete clauses to prevent founders from creating rival products. This strategy helps them maintain market dominance by eliminating competition early.

Evidence

Facebook’s acquisition of Instagram and WhatsApp for $1 billion and $20 billion respectively.

Major Discussion Point

The rise and dominance of big tech companies

Weakening of antitrust enforcement allowed monopolies to form

Explanation

Changes in antitrust enforcement since the 1970s, particularly in the US, have allowed big tech companies to form monopolies. This shift in policy has transformed the economy and allowed tech giants to grow incredibly powerful.

Evidence

Robert Bork’s influence on antitrust thinking, Reagan administration’s reduced enforcement of antitrust laws.

Major Discussion Point

The rise and dominance of big tech companies

Agreed with

Panelist

Agreed on

Need for regulatory intervention

Legal strategies used to maintain market dominance

Explanation

Big tech companies use various legal tools and strategies to maintain their market dominance. These include digital locks protected by laws like DMCA, trademark claims, and complex contracts.

Evidence

DMCA Section 1201, Apple’s use of trademarks on internal iPhone parts to control repairs.

Major Discussion Point

The rise and dominance of big tech companies

Interoperability historically helped break up monopolies like IBM

Explanation

In the past, interoperability played a crucial role in breaking up tech monopolies. IBM’s monopoly was effectively challenged when other companies could create compatible hardware and software.

Evidence

IBM’s decision to use commodity parts and third-party operating systems; Phoenix Technologies’ clean-room reverse engineering of IBM’s BIOS ROM chip.

Major Discussion Point

Interoperability and its impact on tech monopolies

Agreed with

Ian Brown

Agreed on

Importance of interoperability

Adversarial interoperability challenged Microsoft’s dominance

Explanation

Adversarial interoperability, where companies create compatible alternatives against the wishes of the original company, has been effective in challenging tech monopolies. Apple’s creation of iWork to compete with Microsoft Office is an example of this strategy.

Evidence

Apple’s development of iWork to read and write Microsoft Office files, breaking Microsoft’s monopoly on office software (a related file-parsing sketch follows this entry).

Major Discussion Point

Interoperability and its impact on tech monopolies
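
iWork originally parsed Microsoft’s proprietary binary formats, a much harder job. As a loose modern analogue of reading Office files without Microsoft’s own software, the sketch below extracts the text from a .docx file, which is simply a ZIP archive of XML (the documented OOXML format), using only the Python standard library; the file name is a placeholder.

```python
# Extract the visible text from a .docx file without Microsoft software.
# A .docx is a ZIP archive; the body text lives in word/document.xml,
# where each <w:t> element holds a run of text (OOXML format).
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_text(path: str) -> str:
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    return "".join(node.text or "" for node in root.iter(W + "t"))

print(docx_text("report.docx"))  # placeholder file name
```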

Legal and technical barriers now prevent interoperability

Explanation

Today, big tech companies use a combination of legal tools and technical barriers to prevent interoperability. This makes it difficult for competitors to create compatible alternatives or for users to switch services easily.

Evidence

Use of DMCA Section 1201, trademark laws, and complex contracts to prevent interoperability.

Major Discussion Point

Interoperability and its impact on tech monopolies

Global South forced to use tech not designed for local needs

Explanation

Countries in the Global South are often forced to use technology designed for different contexts, which may not work well in local conditions. This creates challenges for users in these countries and limits their ability to adapt technology to their needs.

Evidence

Difficulties in downloading large software updates over slow or expensive internet connections (a resumable-download sketch follows this entry).

Major Discussion Point

Implications for the Global South
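
One practical mitigation for slow or unreliable connections is resuming an interrupted download rather than restarting it, using HTTP Range requests, a standard HTTP/1.1 feature. The sketch below is a minimal illustration, not a hardened downloader; the URL is a placeholder, and it assumes the server honours Range headers.

```python
# Resume a large download from where it stopped, instead of restarting,
# by requesting only the remaining bytes with an HTTP Range header.
import os
import urllib.request

def resume_download(url: str, dest: str, chunk: int = 64 * 1024) -> None:
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
    # Note: a fully downloaded file would yield HTTP 416; handling omitted.
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        if start and resp.status != 206:  # 206 = Partial Content
            raise RuntimeError("server ignored Range; cannot resume safely")
        while block := resp.read(chunk):
            out.write(block)

resume_download("https://example.org/big-update.bin", "update.bin")  # placeholder
```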

Local developers create better-adapted versions of apps

Explanation

In some cases, local developers in the Global South have created modified versions of popular apps to better suit local needs. This demonstrates the potential for locally-adapted technology solutions.

Evidence

Modified version of WhatsApp created in Syria during civil war, Indonesian ride-sharing app modifications.

Major Discussion Point

Implications for the Global South

Collective advocacy for better interoperability

Explanation

Users in different countries could collectively advocate for better interoperability in technology. This could help address some of the challenges faced by users, particularly in the Global South.

Major Discussion Point

Challenges and solutions for interoperability

Differed with

Ian Brown

Differed on

Approach to interoperability implementation


Ian Brown

Speech speed

117 words per minute

Speech length

291 words

Speech time

148 seconds

EU Digital Markets Act mandates interoperability for messaging apps

Explanation

The European Union’s Digital Markets Act requires messaging services like WhatsApp to enable interoperability with other messaging services. This legislation aims to increase competition and user choice in the messaging market.

Evidence

WhatsApp is required to enable interoperability with other messaging services under the Digital Markets Act.

Major Discussion Point

Interoperability and its impact on tech monopolies

Agreed with

Batool Almarzouq

Agreed on

Importance of interoperability

Resource requirements for implementing interoperability are high

Explanation

Implementing interoperability can be resource-intensive, particularly for smaller organizations or open-source projects. This can create challenges in realizing the goals of interoperability mandates.

Evidence

An open source organization found the resource requirements too high to implement WhatsApp interoperability, built on the Matrix protocol, without specific customer financing (a brief Matrix API sketch follows this entry).

Major Discussion Point

Challenges and solutions for interoperability

Differed with

Batool Almarzouq

Differed on

Approach to interoperability implementation
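
Matrix, mentioned in the evidence above, is an openly documented federation protocol, which is what makes bridges to closed services conceivable in the first place. As a minimal illustration of the open half of such a bridge, the sketch below posts a message into a Matrix room through the documented client-server API; the homeserver URL, access token, and room ID are placeholders, and a real WhatsApp bridge would of course involve far more than this.

```python
# Post a text message into a Matrix room via the documented
# client-server API: PUT /_matrix/client/v3/rooms/{roomId}/send/...
import json
import time
import urllib.parse
import urllib.request

HOMESERVER = "https://matrix.example.org"  # placeholder homeserver
ACCESS_TOKEN = "syt_example_token"         # placeholder access token
ROOM_ID = "!abc123:example.org"            # placeholder room ID

def send_text(body: str) -> None:
    txn_id = str(time.time_ns())  # unique transaction ID for idempotency
    url = (f"{HOMESERVER}/_matrix/client/v3/rooms/"
           f"{urllib.parse.quote(ROOM_ID)}/send/m.room.message/{txn_id}")
    payload = json.dumps({"msgtype": "m.text", "body": body}).encode()
    req = urllib.request.Request(
        url, data=payload, method="PUT",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())

send_text("Hello from an interoperable client")
```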


Panelist

Speech speed

137 words per minute

Speech length

226 words

Speech time

98 seconds

Need for regulatory approaches to address privacy violations

Explanation

Big tech companies are engaged in data privacy violations and financial statement violations. There is a need for regulatory approaches to intervene and address these violations.

Major Discussion Point

Challenges and solutions for interoperability

Agreed with

Batool Almarzouq

Agreed on

Need for regulatory intervention

More transparency needed from big tech companies

Explanation

There is a need for greater transparency from big tech companies regarding their protocols, compatibility issues, and use of technology. This transparency is crucial for ensuring fair practices and interoperability.

Major Discussion Point

Challenges and solutions for interoperability

Need for localization of terms of service for different markets

Explanation

Big tech companies often apply their own local laws in different jurisdictions, causing conflicts due to cultural differences. There is a need for localization of terms of service to better suit different markets and cultures.

Evidence

Different parts of the world have different cultures, and content might be offensive in one part of the world and not in another.

Major Discussion Point

Implications for the Global South

Agreements

Agreement Points

Big tech companies have excessive market power

Batool Almarzouq

Panelist

Market capitalization of big tech exceeds GDP of many countries

Big tech companies are engaged in data privacy violations, financial statement violations

Both speakers highlight the immense economic power and influence of big tech companies, which extends beyond national boundaries and leads to various violations.

Need for regulatory intervention

Batool Almarzouq

Panelist

Weakening of antitrust enforcement allowed monopolies to form

Need for regulatory approaches to address privacy violations

The speakers agree that there is a need for stronger regulatory approaches to address the dominance of big tech companies and their violations.

Importance of interoperability

Batool Almarzouq

Ian Brown

Interoperability historically helped break up monopolies like IBM

EU Digital Markets Act mandates interoperability for messaging apps

Both speakers emphasize the importance of interoperability in challenging tech monopolies, citing historical examples and current legislation.

Similar Viewpoints

Both speakers highlight the need for increased transparency and scrutiny of big tech companies’ practices to ensure fair competition and protect user rights.

Batool Almarzouq

Panelist

Legal strategies used to maintain market dominance

Need for transparency from big tech companies

The speakers agree that technology and policies need to be adapted to local contexts, particularly in the Global South, to better serve users in different regions.

Batool Almarzouq

Panelist

Global South forced to use tech not designed for local needs

Need for localization of terms of service for different markets

Unexpected Consensus

Challenges in implementing interoperability

Batool Almarzouq

Ian Brown

Legal and technical barriers now prevent interoperability

Resource requirements for implementing interoperability are high

While both speakers advocate for interoperability, they unexpectedly agree on the significant challenges in implementing it, including legal, technical, and resource barriers.

Overall Assessment

Summary

The speakers generally agree on the excessive power of big tech companies, the need for regulatory intervention, the importance of interoperability, and the necessity of adapting technology to local contexts.

Consensus level

There is a high level of consensus among the speakers on the main issues discussed. This consensus suggests a shared understanding of the challenges posed by big tech dominance and the potential solutions, which could provide a strong foundation for developing policy recommendations and regulatory frameworks to address these issues.

Differences

Different Viewpoints

Approach to interoperability implementation

Batool Almarzouq

Ian Brown

Collective advocacy for better interoperability

Resource requirements for implementing interoperability are high

While Batool Almarzouq suggests collective advocacy for better interoperability, Ian Brown highlights the high resource requirements as a challenge for implementation, especially for smaller organizations.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the implementation of interoperability and the focus of regulatory approaches.

Difference level

The level of disagreement among the speakers is relatively low. The speakers generally agree on the need for interoperability and regulatory intervention, but they emphasize different aspects and challenges. This suggests a nuanced understanding of the complex issues surrounding big tech dominance and interoperability.

Partial Agreements

Both speakers agree on the need for regulatory intervention, but they focus on different aspects. Batool Almarzouq emphasizes the legal strategies used by big tech to maintain dominance, while the Panelist focuses on addressing privacy violations.

Batool Almarzouq

Panelist

Legal strategies used to maintain market dominance

Need for regulatory approaches to address privacy violations


Takeaways

Key Takeaways

Big tech companies have achieved and maintained dominance through acquisitions, legal strategies, and weakened antitrust enforcement

Interoperability historically helped break up tech monopolies, but is now prevented by legal and technical barriers

The EU Digital Markets Act mandates interoperability for messaging apps, which could change the balance of power between users and tech companies

Global South countries face challenges in using technology not designed for their local needs

There is a need for more transparency and localization from big tech companies

Resolutions and Action Items

Produce and share a report summarizing the session’s reflections and discussions with all participants

Unresolved Issues

How to effectively implement interoperability given high resource requirements

How to address data privacy violations by big tech companies

How to ensure neutrality and compatibility across different platforms

How to balance localization of tech services with global operations

Suggested Compromises

Implementing both quick fixes (like the EU’s Digital Markets Act) and long-term solutions to address tech monopolies

Allowing competitive compatibility to give users more choices while working on rebuilding a more open internet

Thought Provoking Comments

According to Oxfam, the top 1% owns 43% of all global financial assets, while just two corporations control 40% of the global seed market. This is how serious it is.

speaker

Batool Almarzouq

reason

This statistic provides a striking illustration of the extreme concentration of wealth and market power, setting the stage for the discussion on tech monopolies.

impact

It framed the subsequent discussion by emphasizing the scale and seriousness of monopolistic control in the global economy.

Instead of winning by making a better product, they win by buying up anyone who might compete with them.

speaker

Batool Almarzouq

reason

This succinctly captures a key strategy used by big tech companies to maintain dominance, challenging the notion that their success is purely based on innovation.

impact

It shifted the conversation to examine the business practices of tech giants, rather than just their products or services.

There are three distinct types of interoperability. There is cooperative interoperability, where companies willingly work together and follow agreed-on standards, like the email protocols: for example, Gmail can send to Apple Mail because they share a standard based on mutual agreement and cooperation. There is indifferent interoperability, where one company doesn’t actively help or hinder others. But the one that I would like to focus on is adversarial interoperability.

speaker

Batool Almarzouq

reason

This breakdown of different types of interoperability provides a nuanced framework for understanding how tech systems can interact, highlighting the complexities involved.

impact

It deepened the technical understanding of interoperability and set up the subsequent discussion on the challenges and potential solutions in this area (a brief sketch of the email example follows this entry).
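
The email example in the comment above can be made concrete: because SMTP is a shared open standard (RFC 5321), any compliant client can hand mail to any compliant server without asking a vendor’s permission. A minimal sketch, assuming a reachable mail server on localhost; the addresses are placeholders.

```python
# Cooperative interoperability in practice: any client that speaks SMTP
# can hand mail to any compliant server, because the standard is shared
# and no single vendor controls who may participate.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"   # placeholder addresses
msg["To"] = "bob@example.net"
msg["Subject"] = "Interoperability demo"
msg.set_content("Delivered across providers via a shared open standard.")

with smtplib.SMTP("localhost", 25) as smtp:  # assumes a local mail server
    smtp.send_message(msg)
```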

The Global South faces a double burden. It is forced into passing tech laws friendly only to US corporations, and it has to use technology designed for other countries that often does not work well in local conditions.

speaker

Batool Almarzouq

reason

This comment brings attention to the often-overlooked challenges faced by developing countries in the global tech landscape, highlighting issues of technological colonialism.

impact

It broadened the scope of the discussion to include global inequities in tech development and regulation, leading to reflections on how interoperability could benefit the Global South.

The EU is also considering extending this legal requirement to social networking services like Facebook. That was debated when the law was passed. The European Parliament wanted that to happen, but the EU governments would not allow it to happen. But after a three year review, that may happen next time.

speaker

Ian Brown

reason

This comment provides insight into ongoing regulatory efforts in the EU, showing how interoperability requirements are being considered for expansion.

impact

It grounded the theoretical discussion in current policy developments, offering a concrete example of how interoperability might be mandated in practice.

Overall Assessment

These key comments shaped the discussion by providing a comprehensive overview of the issues surrounding tech monopolies and interoperability. They moved the conversation from broad economic concerns to specific tech industry practices, then to technical aspects of interoperability, global implications, and finally to current regulatory efforts. This progression allowed for a multi-faceted exploration of the topic, touching on economic, technical, social, and policy dimensions.

Follow-up Questions

How can users in different countries collectively advocate for better interoperability?

speaker

Batool Almarzouq

explanation

This was posed as a discussion question to explore country-specific strategies for promoting interoperability

What challenges exist in implementing mandatory interoperability?

speaker

Batool Almarzouq

explanation

Understanding potential obstacles is crucial for effectively implementing interoperability policies

How might mandatory interoperability change the balance of power between users and tech companies, and how might it affect innovation?

speaker

Batool Almarzouq

explanation

Exploring the potential impacts of interoperability on power dynamics and innovation in the tech industry

How can the resource requirements for implementing interoperability (e.g. for WhatsApp) be addressed for smaller organizations?

speaker

Ian Brown

explanation

This highlights a practical challenge in implementing interoperability that needs further investigation

How can security policies of different organizations be aligned to enable interoperability between communication tools?

speaker

Ian Brown

explanation

This is important for enabling secure interoperability between organizations using different communication platforms

What regulatory approaches are needed to intervene in data privacy and financial statement violations by big tech companies?

speaker

Panelist from group 2

explanation

This area requires further research to develop effective regulatory strategies

How can transparency be improved regarding the protocols and algorithms used by big tech companies?

speaker

Panelist from group 2

explanation

Greater transparency is needed to understand and address issues related to data use and user influence

How can big tech companies better localize their terms of use for different markets and cultures?

speaker

Panelist from group 2

explanation

This is important for addressing conflicts arising from applying uniform policies globally

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Main Session | Dynamic Coalitions

Session at a Glance

Summary

This discussion focused on the contributions of Dynamic Coalitions to the implementation of the Global Digital Compact’s (GDC) five key objectives. Representatives from various Dynamic Coalitions presented how their work aligns with and supports these objectives, which include bridging digital divides, expanding digital economy inclusion, fostering safe and inclusive digital spaces, advancing responsible data governance, and enhancing global AI governance.

The speakers highlighted the diverse range of issues addressed by Dynamic Coalitions, from internet rights and children’s online safety to data governance and AI ethics. They emphasized the importance of multistakeholder engagement and the need to include marginalized voices in shaping internet governance policies. Several coalitions are working on issues such as digital inclusion, accessibility for persons with disabilities, and environmental sustainability in the digital realm.

Participants discussed the potential for Dynamic Coalitions to play a significant role in the implementation of the GDC and the broader sustainable development agenda. They stressed the importance of collaboration between coalitions and the need to focus on tangible outcomes and impacts. The discussion also touched on the importance of data-driven policymaking and the need to prioritize data integrity and integration.

The session concluded with a call for Dynamic Coalitions to be proactive in setting the agenda for the next Internet Governance Forum and to continue their work in supporting the objectives of the GDC. Participants emphasized the open nature of Dynamic Coalitions and encouraged wider participation in their activities to help shape the future of internet governance.

Keypoints

Major discussion points:

– Dynamic coalitions are working to implement the Global Digital Compact (GDC) objectives and Sustainable Development Goals

– Dynamic coalitions address issues like bridging digital divides, expanding digital inclusion, fostering safe online spaces, data governance, and AI governance

– Dynamic coalitions provide a platform for multistakeholder engagement and year-round IGF activities

– There are opportunities for dynamic coalitions to collaborate and set the agenda for GDC implementation

– Dynamic coalitions can contribute concrete outputs and recommendations to inform policymaking

Overall purpose:

The purpose of this discussion was to highlight how IGF dynamic coalitions are contributing to the objectives of the Global Digital Compact and sustainable development, and to explore how they can further engage in GDC implementation.

Tone:

The tone was informative and collaborative, with speakers providing overviews of dynamic coalition work and participants offering suggestions for future engagement. There was an enthusiastic and optimistic tone about the potential for dynamic coalitions to contribute meaningfully to internet governance processes. The tone became more action-oriented towards the end, with calls to set concrete agendas and outputs for upcoming IGF meetings.

Speakers

– Jutta Croll and Irina Soeffky: Moderators of the session

– Mark Carvell: Co-moderator of the session, former UK government digital policy official, senior policy advisor with Dynamic Coalition on Internet Standards, Security, and Safety (IS3C)

– Muhammad Shabbir: Member of Internet Society’s Accessibility Special Interest Group, member of Pakistan ISOC chapter, member of the Dynamic Coalition on Accessibility and Disability (DCAD)

– Olivier Crepin-Leblond: Founder and investor of WAF lifestyle app, chair of Dynamic Coalition on Core Internet Values

– Tatevik Grigoryan: Co-chair of the Dynamic Coalition on Equitable and Interoperable Data Governance and of the coalition on Internet Universality Indicators

– Yao Amevi Amnessinou Sossou: Project Leader and Research Associate, the Institute of International Management at FH JOANNEUM Graz

Full session report

Dynamic Coalitions and the Global Digital Compact: Aligning Efforts for Digital Progress

This session explored how Dynamic Coalitions (DCs) contribute to the implementation of the Global Digital Compact’s (GDC) five key objectives. Representatives from various DCs presented their work in relation to these objectives, demonstrating the breadth and depth of their impact on internet governance.

Introduction and Session Purpose

The moderator, Jutta Croll, introduced the session by explaining its aim: to showcase how DCs’ work aligns with and supports the GDC objectives. She emphasized that DCs are already actively working towards these goals and invited anyone interested to join their efforts.

Objective 1: Bridging Digital Divides

The session highlighted the challenges faced by Small Island Developing States (SIDS) in digital development. Participants emphasized the need for DCs to address the increasing polarization and marginalization in internet policy and strategy, particularly for SIDS and other underrepresented regions.

Objective 2: Expanding Digital Economy Benefits

Muhammad Shabbir discussed how various DCs contribute to this objective:

– DC on Financial Inclusion: Promoting access to financial services

– DC on Open Education: Enhancing educational opportunities

– DC on Accessibility: Ensuring digital inclusivity for persons with disabilities

– DC on Environment: Addressing environmental impacts of digitalization

Objective 3: Fostering Safe and Inclusive Digital Spaces

Olivier Crepin-Leblond outlined the work of several DCs in this area:

– DC on Internet Rights and Principles: Promoting human rights online

– DC on Child Online Safety: Protecting children’s rights in the digital space

– DC on Youth: Engaging young people in internet governance

– DC on Internet Standards, Security and Safety: Enhancing online security

He emphasized that at least one-third of internet users are under 18, highlighting the importance of the UN Convention on the Rights of the Child in the digital context.

Objective 4: Advancing Responsible Data Governance

Tatevik Grigoryan discussed the contributions of DCs working on:

– Data governance frameworks

– Artificial Intelligence ethics and regulation

– Internet universality indicators

She highlighted the potential for these coalitions to contribute to data-driven policymaking and national assessments.

Objective 5: Enhancing Global AI Governance

Yao Amevi Amnessinou Sossou presented the work of DCs related to AI governance:

– DC on Gender and Internet Governance: Addressing gender biases in AI

– DC on Internet of Things: Exploring AI applications in connected devices

– DC on Blockchain: Investigating AI’s role in distributed ledger technologies

– DC on Digital Health: Examining AI’s impact on healthcare

Discussion and Audience Participation

The session included valuable contributions from audience members:

– Dr. Rajendra Pratap Gupta emphasized the need to focus on job creation and connecting the unconnected. He also noted the absence of big tech companies at this IGF.

– An audience member stressed the importance of data integrity and integration.

– Dr. Gupta highlighted the potential of gaming and gamification for social good across various sectors.

– A representative from the Creators Union of Arab announced an intellectual property verification platform.

WSIS+20 Review and Dynamic Coalitions

Mark Carvell mentioned the upcoming WSIS+20 Review, emphasizing its relevance to the work of Dynamic Coalitions and their potential contributions to this process.

How to Join Dynamic Coalitions

Jutta Croll provided information on how interested individuals can join Dynamic Coalitions, emphasizing their open nature and encouraging wider participation.

Conclusion

The session demonstrated the crucial role Dynamic Coalitions play in addressing the GDC objectives and broader internet governance issues. By fostering multistakeholder engagement and tackling a wide range of topics, DCs are well-positioned to shape the future of internet governance. The discussion highlighted the need for continued collaboration, focus on tangible outcomes, and inclusion of diverse voices in the process.

Key takeaways:

1. Dynamic Coalitions are actively contributing to all five GDC objectives.

2. There is a need for greater inclusion of underrepresented regions, particularly SIDS.

3. DCs provide a platform for year-round engagement on internet governance issues.

4. The upcoming WSIS+20 Review presents an opportunity for DCs to contribute their expertise.

5. Wider participation in DCs is encouraged to enhance their impact and representation.

Session Transcript

Irina Soeffky: This session builds really well on last year’s Dynamic Coalition main session, which was called The Internet We Want: Human Rights in the Digital Space to Accelerate the SDGs, and which was an ideal platform for discussions on sustainable digital development. It also fits very well into the topic of this IGF, which has one sub-theme that is particularly fitting for what we intend to do today: improving digital governance for the Internet we want. And with that broad picture, I hand it over to my colleague Mark, who will tell you a bit more about what is about to happen here on stage.

Mark Carvell: Okay. Thank you very much, Irina. And good morning, everybody. Thank you very much for coming at this early hour during a busy week here in Riyadh at the IGF; it’s much appreciated. And welcome to everybody who’s following us online; it’s very good to have you with us today. As Irina said, I’m sharing the moderation with her for this session. My background is in the UK government on digital policy, going all the way back to 2005 and the Tunis summit of the World Summit on the Information Society, so I’ve been to most IGFs. Since leaving the UK government, I’ve continued to be engaged with the IGF community as a member of a dynamic coalition, IS3C, the Dynamic Coalition on Internet Standards, Security, and Safety, where I’m a senior policy advisor; it is one of the coalitions we’ll cover this morning. I’m also working with EURODIG, the European regional IGF, on the Global Digital Compact: I’ve been chairing the consultations undertaken by EURODIG on that and inputting into the co-facilitators’ consultations with stakeholders on behalf of EURODIG. So it’s great to co-moderate this session, which I think will highlight, in many ways, how dynamic coalitions have a major potential role in the Global Digital Compact and its follow-up: following its signature in New York, it is now entering the implementation phase with an endorsement process and so on. The dynamic coalitions are standing ready, really, to assist with all that, and what we’re doing here today is providing them with the opportunity to explain how they connect with the Global Digital Compact process and the sustainable development agenda as well. There are 31 dynamic coalitions currently, covering a diverse range of technology, governance, sectoral and public policy issues, opportunities and challenges. They are very focused, year-round IGF activities, staffed by volunteers. And we would encourage people who are interested to follow up, potentially as a member of a coalition; it’s very easy to do, you basically just sign up. It’s a great opportunity for stakeholders who want to get involved, and we’ll talk about that right at the end. As I say, the coalitions cover a wide range of issues, and 21 of the 31 coalitions immediately stepped forward when the coordination group of the dynamic coalitions announced a main session focused on the Global Digital Compact and sustainable development; they said, look, we are doing work that is highly relevant to the scope of the compact. So what we’re doing today is going through the objectives, which Irina recounted for us at the beginning, one by one, with representatives of the clusters of dynamic coalitions that have said a particular objective is the one they are most engaged in, giving you a quick explanation of what those coalitions are for each objective. Then we’ll have a little bit of discussion in the panel for each objective, and after that we will open it out to everybody taking part in an interactive discussion. So save your questions, comments, reactions and anything else you want to raise with any of the speakers or any of the representatives of the Dynamic Coalitions until we reach that part of today’s session.
We want to hear from you; we want to know what you think the coalitions can do, perhaps more, and areas where they can collaborate amongst themselves. That’s very important to bear in mind: there are coalitions that work in similar sectoral areas, and they can usefully collaborate in this whole environment of the Global Digital Compact. So that’s what we’ll do. I think I’ve probably said enough. We will want to try and define some messages and potential outcomes from this session, and Jutta Croll, who helped to organise this session, is ready to support Irina and me in wrapping up with some potential ways forward and recommendations, what we might do in terms of engaging as a coalition community with the whole GDC process. We’ll do that at the end, in the last ten minutes or so. So the emphasis is on participation, on hearing from you. We have a bit of a download of information; we’ll go through that as succinctly as possible, but it is really a preface to hearing from you. I’ll stop there with the explanation; I hope that’s all clear. Back to you, Irina, for kicking off with our objectives.

Irina Soeffky: It’s a pleasure to first introduce June Paris, who unfortunately cannot be here today, but luckily she’s with us online. It’s really hard to introduce a person like June in just 30 seconds; it’s almost impossible. She’s not only an experienced nurse, but she’s also engaged in groundbreaking research; lately she has worked on nutrition in pregnant women. She has extensive experience in business development, research, and startup involvement, as well as business support and volunteer work. I could go on forever, but I’m glad to hand it over to her, and she will talk about the possible contributions of dynamic coalitions to the Global Digital Compact’s objective of bridging digital divides. Over to you, June. Can you hear us? We can’t hear you yet. We can’t hear you yet, so maybe there is some technical person who can help us fix the Internet and make it open, accessible, and everything else. If not, we could also wait while you try to fix the technical problems, and we start with our second speaker. Perfect. Then I hand it over to you, Mark.

Mark Carvell: Okay, back to me then, Irina. Thank you, yes. Okay, let’s hopefully come back to June and Objective 1 a little later, when the technical issue is resolved. So let’s go to Objective 2, expanding the digital economy, inclusion, and benefits for all. Our speaker to describe the coalitions in this cluster is Dr. Muhammad Shabbir, who is a member of the Internet Society’s Accessibility Special Interest Group, a member of the Pakistan ISOC chapter, and a member of the Dynamic Coalition on Accessibility and Disability, DCAD. So Dr. Shabbir, can I hand over to you to describe the cluster of dynamic coalitions that have signaled they are most interested in contributing to Objective 2.

Muhammad Shabbir: Thank you. Yes, thank you very much and good morning to everyone. It is a privilege to address this session, where we gather as representatives of the IGF’s diverse and dynamic community to align our collective contributions with the Global Digital Compact. As part of this significant milestone in global digital governance, our work aligns with the Global Digital Compact and its aims of inclusion, accessibility, and sustainability in the digital economy. The specific objective I am addressing today, expanding digital economy inclusion and benefits for all, is foundational to achieving this vision. The GDC demands that all stakeholders work collaboratively to dismantle barriers, promote digital equity, and harness technology to empower individuals and communities. I am honored to represent four of the dynamic coalitions in our community, as displayed, and these coalitions collectively show that, if we want, we can achieve anything in the digital arena. Together they exemplify how collaboration and shared experience strengthen the IGF ecosystem and drive the implementation of the GDC. Each coalition operates with a shared ethos of inclusivity, equity and sustainability, building bridges between stakeholders and addressing the critical challenges in the way of digital development. As we all know, access to digital financial services is a cornerstone of economic inclusivity. We have identified key barriers, such as interoperability issues, restrictive policies, and gaps in digital and financial literacy, that hamper access to equitable, affordable and effective digital financial services. Knowledge is a public good, and the DC on Open Education is committed to making it universal and accessible. By promoting openly licensed content as a foundational cornerstone of digital inclusion, it contributes to advancing the goals of the GDC and its principles of inclusivity, accessibility, and innovation. Open licenses in educational contexts give people across different linguistic and other cultures access to information, and by equipping individuals with the skills to thrive in the digital economy, the DC is transforming education into a catalyst for sustainable development. Digital inclusion also requires that persons with disabilities can fully participate. The Coalition on Accessibility and Disability has been at the forefront of advocating for accessibility in digital platforms and policies. The vision of the coalition is clear: every person, regardless of ability, has access to information and digital content. This year the coalition is revising its accessibility guidelines for IGF meetings, which can also be used by other organizations, and we have four fellows, persons with lived experience of disability, participating inclusively in these corridors. Environmental sustainability is not just an environmental imperative; it is an economic opportunity, and the Coalition on Environment advocates for the integration of green policies into digital governance. The work of these coalitions may seem as if it happens in silos, but when it is seen collectively as a comprehensive, cohesive whole, it shows that the economy is driven by education, includes persons with disabilities, and requires green policies. Collectively and collaboratively, we can achieve the goals of the Global Digital Compact and Vision 2030. Thank you very much.

Mark Carvell: Thank you very much. Yes, a round of applause for Dr. Shabbir. Thank you very much. All four coalitions are doing incredibly important work, really addressing some of the crucial challenges, as well as opportunities, for communities that may be marginalized or disadvantaged. And I wonder, just one question: no doubt those four coalitions are identifying common barriers that keep marginalized communities from participating in the digital economy. Do you think there is scope, particularly perhaps through collaboration amongst those coalitions, to address and resolve those barriers? What do you think? Thank you.

Muhammad Shabbir: Yes, thank you very much. That’s a really crucial question: where the work of one DC enters the domain of another DC, and how they collaborate with one another. To give just one example from these four coalitions: the DC on Accessibility and Disability, which works for the rights of persons with disabilities, has conducted webinars and sessions with the DC on Financial Inclusion on making banking and financial systems accessible for people with disabilities. Similarly, today after this session, two coalitions are joining together in a workshop where we will discuss how digital accessibility in education can be ensured for everyone, with and without disabilities. As for the Coalition on Environment, we cannot forget the environment; it is something we all work towards and live in, and we will all be impacted if we don’t take care of it. So while we work in our individual domains, we try to incorporate steps so that policies and achievements are sustainable and green.

Mark Carvell: Thank you, Dr. Shabbir. That’s a great example of working together on common objectives that are directly relevant to the compact, so I’m sure this may well come up again in our discussion with people taking part today. Okay, with that I’ll hand back to you, Irina, for Objective 1, maybe, fingers crossed.

Irina Soeffky: Yeah, I think so too, and I’m checking with June Paris whether she’s hearing us.

June Paris: Can you hear me? Yes. Okay. Please, go ahead, we’re looking forward to hearing you talk about bridging digital divides. Yes, I am June Paris. I am in Barbados at the moment, and it’s 2.26 in the morning, so I will speak quickly, and I will read the introduction for SIDS. I will start straight away. While many in the global internet community, especially those interested in issues surrounding internet governance, are fully engaged with and attuned to the developments surrounding WCIT, WTSA, and ICANN, and the challenges and opportunities brought about by emerging issues such as cloud computing, social media, and mobile technology, it is becoming increasingly apparent that a greater degree of polarization and marginalization in the area of internet policy and strategy has been slowly occurring. SIDS are found in the Caribbean, the Pacific, and the AIMS region, which covers Africa, the Indian Ocean, the Mediterranean, and the South China Sea. Small island developing states, SIDS, which number about 52 at the last count and comprise approximately 60 million people, are seeking a greater voice, at a higher volume, in the international discourse, especially that relating to information and communication technology and critical resource management. According to various reports and documents published by the United Nations and other international organizations, the SIDS share several common sustainable development challenges: small populations, as low as 2,000 in one state; limited resources; remoteness; and susceptibility to natural disasters. They are vulnerable to external economic shocks and excessively dependent on international trade and extractive industries. Indeed, the internal economies of many states are characterized by state monopolies, effective monopolies by MNCs, and oligopolies, which often lead to price distortions for key goods and services. In the ICT sector, especially the telecommunications sector, voice and data operators are most likely to be monopolies or oligopolies, with attendant issues relating to non-competitive pricing, low levels of customer service, aging infrastructure, and a lack of universal accessibility, with digital inclusion and digital divide scenarios often playing out to the disadvantage of one or more sectors of the population. I have experience of that, so I can say, yes, this is what’s really happening. Faced on a daily basis with severe environmental, energy, and natural resource management challenges, the states are hard-pressed to take full advantage of the potential benefits and opportunities made available through emerging technology, such as cloud computing and on-demand ICT services, given the tremendous consumption of energy, capital, and natural resources that on-demand facilities of this nature demand. In this regard, and with a view to ensuring that these issues are properly ventilated amidst the debates among OECD, G20, and BRICS countries relating to internet and ICT policy and strategy, telecommunication standards and tariffs, universal access, and sustainable development funding approaches, it is obvious that the number and volume of SIDS voices must be elevated in the design, planning, and participation of collaborative activities with their larger colleagues, in order to better align and contextualize policies, positions, and strategies. If I’ve got time, I will go on a bit.
Although there are shared experiences and multiple synergies among the SIDS, it is not by any means an easy task to simply organize and facilitate this intention to raise the volume. Logistically, it is near impossible to address the needs of 52 countries and 60 million voices, spanning thousands of miles of oceans across the globe, through a single or even a series of position papers or a solitary conference session. The needs and requirements of the SIDS deserve more: a forum through which the international community can hear their concerns and challenges; a forum through which SIDS can sit together and collaborate among themselves to define and offer their own possible solutions to their own problems; a forum in which exchanges of opinions, views, and possible solutions can be achieved on a level and equitable playing field. It is, therefore, incumbent upon the Internet Governance Forum and, indeed, the wider WSIS process to provide a dedicated space for the SIDS to dialogue, firstly amongst themselves and then with the wider global community, on a broad range of issues relating to and affecting internet policy, modernization of critical internet infrastructure resources, the economics of telecommunications service provision, telecommunications service pricing and its relationship to sustainable development and development funding, quality of service, and quality of customer service practices, all of which take full consideration of the unique vulnerabilities and environmental sensitivities of these small island nations. A dedicated and ongoing internet governance space for SIDS cuts across all the world’s major geographical regions and would provide a useful example not only of multistakeholderism but also of South-South cooperation. As small island developing states face the greatest risks and challenges due to the global economic downturn, double-dip recession, and the Eurozone crisis, there is no better time than now to forge and harden this relationship, this partnership, and for the United Nations, the Internet Society, the International Telecommunication Union, and other organizations to recognize and support this quantum leap forward. I will end there for now. If there are any questions, I will take them when the other speakers have spoken.

Irina Soeffky: Thank you so much, June, for this broad and very rich presentation. As we are already running a little late, I will keep my questions for the exchange session later on, and then we can get into discussion. I’m sure there will be contributions, because it’s such a fundamental topic that you’ve been talking about. But to get us moving, I’ll go to my next presenter, Olivier Crepin-Leblond. It’s a pleasure to have you here. You are a co-founder of and investor in the WAF lifestyle app, and you describe yourself as a connector, bringing people together to achieve great things. I find it particularly impressive that you have been dealing with all things internet since 1988; you’ve been around forever, so to speak, and experienced it all. We are therefore very much looking forward to hearing you talk today about how dynamic coalitions can contribute to the GDC objective of fostering a safe, secure, and inclusive digital space that upholds human rights. Olivier, over to you.

Olivier Crepin-Leblond: Thank you very much, Irina. And you know, it feels like yesterday, 1988, the start of this whole craziness of the internet and so many people coming online. Anyway, I have about four minutes to talk to you about five dynamic coalitions, so I hope you’re holding onto your seats, because this is going to be rather quick. The first one is the Internet Rights and Principles Coalition, which has been around for quite some time. It works to uphold human rights in the online environment and to root internet governance processes and systems in human rights standards. It raises awareness of fundamental human rights on the internet; establishes global public policy principles for an open internet with stakeholder involvement; encourages stakeholders to address human and civil rights in policymaking; applies human rights to the internet and ICTs; assesses existing structures and guidelines; and works on protecting and enforcing human rights online, which is a task by itself. It promotes people-centric, that’s important, public-interest internet governance, and defines the duties and responsibilities of internet users and stakeholders to preserve the public interest online. It produced the Charter of Human Rights and Principles for the Internet back in 2011, with 21 articles and ten principles; it’s really recommended reading, you can find it online. Very, very good work, and its contributions to the SDGs have really all come from that charter. The next one is the Dynamic Coalition on Children’s Rights in the Digital Environment. At least one-third of internet users are under 18. I wasn’t aware of that; it’s quite amazing. And their rights must be protected as outlined in the UN Convention on the Rights of the Child. With the advent of AI and virtual reality, policies must prioritize children’s involvement in shaping these issues. The Global Digital Compact calls for national child safety priorities by 2030, and child rights impact assessments should guide legislation and policy and make sure that children are protected from online threats. The next one, the Youth Coalition on Internet Governance, has really worked on bringing young people, young professionals, into internet governance, particularly at the annual IGF, and you will have seen quite a number of people from this coalition at this IGF. It serves as a natural space for youth engagement on the issues we’ve already spoken about: amplifying youth voices in digital governance, which, as we heard with the previous DC, is probably not yet strong enough; empowering youth through digital literacy; closing the digital divide, because it’s not only geographical, there is also a digital divide as far as age is concerned; prioritizing youth safety online, a huge issue as we are all aware; leveraging technologies for social good; and holding platforms accountable, which I think is quite a task for them. All of these efforts aim to foster a more inclusive and equitable digital landscape for young people.
Moving on, on safety, there is the Internet Standards, Security and Safety Coalition, the IS3C, which is very active as well. Its mission, aligning with Objective 3 of the Global Digital Compact, is to enhance online security through effective deployment of security standards and best practices. Good practice, I think some would call it. After extensive research and analysis, the coalition has developed policy recommendations, guidelines, and toolkits for global dissemination, and identified best practices for capacity building in three key areas: first, the evolution of secure-by-design technologies; second, addressing cybersecurity skills gaps in educational curricula, a huge topic as well; and third, strengthening public sector procurement practice as a driver for security standards implementation. Security is a huge topic on the Internet, as you know, and the IS3C is considering a new work stream on consumer protection and digital trust for 2025. To this end, consumer organizations are being consulted on a project proposal to assess whether the tech industry effectively addresses the security concerns of personal and business internet users in digital product and service design. The proposal was actually introduced at this IGF, so if you’ve missed it, I would recommend you watch the recording. And I’ll finish with the last one, the DC that I chair, the Dynamic Coalition on Core Internet Values. It promotes internet governance principles, engages with diverse stakeholders, and tries to ensure the Internet remains a global public resource through policy recommendations, collaborations, and fora, all advocating for a free, secure, and resilient Internet for all. Now, you might wonder, what are those core internet values? I’ll just list them, because we could probably talk about them for another hour. The Internet is global, obviously; interoperable; open; decentralized; end-to-end, from one end to the other; user-centric, which I think we mentioned with another DC earlier on, and it’s really important; robust and reliable, and the Internet is pretty darn robust; and finally, secure, a value we had to add, because it originally wasn’t really thought of when everyone on the Internet knew each other, but these days you have to make sure that it is secure. Together, the work of all of these dynamic coalitions fulfills Objective 3 of the GDC to the letter. And do you remember the objective? Okay, I’ll remind you: fostering a safe, secure, and inclusive digital space that upholds human rights. And that’s what they all work on.

Irina Soeffky: Yes, thank you for your applause. And thank you, Olivier, very impressive work there as well. As we are still running a little late, I will not use my prerogative here to ask questions first, but will save my questions for later on, when we get into a discussion. And with that, I hand it over to Mark for our next guest.

Mark Carvell: Thank you, Irina. Yes, let’s go straight on to Objective 4, Advancing Responsible, Equitable and Interoperable Data Governance. We are pleased to be joined by Tatevik Grigoryan, a member of the Dynamic Coalition on Internet Universality Indicators, who will present the cluster of coalitions relating to this objective. I think we have it on the screen, yes: there are three coalitions in this cluster. So over to you, Tatevik, please.

Tatevik Grigoryan: I would like to start by saying that we have a number of dynamic coalitions in this cluster, each of which contributes to this objective through a different type of work and focus. Objective 4 advocates for robust data governance frameworks that are transparent, accountable, and inclusive, involving multiple stakeholders such as governments, private sector entities, and civil society. The first dynamic coalition focuses on the economic dimension of Objective 4, highlighting how equitable access to data and digital tools can reduce inequalities and enable sustainable growth. For example, through initiatives like the project CREATE, the coalition illustrates how to address the challenges of digitalization, and this is a call for equitable data governance that advances inclusion and economic empowerment. The next dynamic coalition contributing to this objective is the DC on Data and Artificial Intelligence Governance, which focuses on fostering critical discussions on data and AI, and on ensuring that the voices of underrepresented populations, particularly from the global south, are integrated into governance frameworks. The coalition promotes collective studies and multistakeholder engagement to evaluate evidence and propose policy updates that reflect the realities and aspirations of stakeholders globally. The third dynamic coalition is a coalition of international partners that contributes not only to Objective 4 but also to Objective 1 and other aspects of the GDC through the Internet Universality Indicators, a comprehensive tool for addressing emerging digital issues. This tool advocates for an Internet that is universal and based on the principles of human rights, openness, accessibility and multistakeholder participation, and it covers human security online, environmental aspects of the Internet, and AI, among others. So this is a framework which, as Objective 4 underscores, supports data-driven policymaking, and I think it is a very important opportunity to ensure that diverse stakeholders, from government to civil society and the private sector, can contribute to policymaking that is based on evidence. I’ll perhaps stop here.

Mark Carvell: Thank you very much. I think that’s a very good overview of the data governance agenda. And congratulations on the updating and publication of the indicators toolkit and the cross-cutting issues, and, as you say, how it relates specifically to responsible data governance. It’s very impressive. Maybe there will be questions and comments about that from our participants here today.

Irina Soeffky: Thank you very much. Our next guest is Yao Amevi Amnessinou Sossou. He’s a research fellow for innovation and entrepreneurship. He’s a real computer scientist, which I always find very impressive; I’m a lawyer myself, so I know nothing about all that in theory and practice. He also has a master’s in management and interaction design, something that I find particularly interesting, but he will not talk about that, unfortunately, today. Instead he’ll concentrate on the contributions of dynamic coalitions to a topic that really everybody seems to be talking about at the moment, and this is enhancing global AI governance for humanity’s benefit. Please, Yao, over to you.

Yao Amevi Amnessinou Sossou: Thank you, Irina, for these introductions. Yes, and I will try to be as short as possible, because I think other colleagues from the DCs are also in the room, and they will contribute and expand more on what I will be explaining about the work of these four DCs. Like the previous speakers, I am in charge of expanding on the work of four dynamic coalitions that have each been contributing a unique perspective and approach in addressing key issues in the Internet governance space. First, regarding the Dynamic Coalition on Gender and Internet Governance: there has been progress on the voices of women in this space, but the voices of women, gender minorities, and social minorities, especially in the global south, are still underrepresented. The work of the DC on gender is to change this by working with feminist and intersectional perspectives, bringing them into discussions on privacy, AI, access, and freedom of expression for these underrepresented groups. Their contribution highlights the importance of bringing women into the discussion so that they can contribute to and meaningfully impact it. The next coalition works on the Internet of Things, where it is important to nurture the technological capability and development of IoT devices, and to prioritize safety and security from the design stage and across the whole life cycle of these devices. Their work in general highlights the importance of responsible governance in shaping the future of the devices we use, so that we all benefit from them more efficiently, and I think they are doing very incredible work in this space as well. Summing up the work of the third DC, they work at the intersection of AI and the health industry, with an emphasis on the need for robust regulations ensuring that AI in healthcare is ethical, transparent and impactful. They are building capacity, as I mentioned, to help stakeholders in this space navigate the complexity of digital health innovations and to make sure that these innovations are responsibly managed as well. Last, but not least, is the DC that I am part of, the Data-Driven Health Technologies DC, where we bring a bottom-up approach into the discussions shaping the IGF. Our focus in this bottom-up approach is to bring the perspective of individuals into the discussion. We believe that by giving the public space to voice their concerns, we are shaping and improving contributions in the space of health and technology. By bringing this to the front, the DC on data-driven health technologies is actually contributing to the SDGs in healthcare and wellness.
Not spending more time on details, because of the time we have left, I will just summarize: you will have noticed that these different DCs focus on different key areas in the global internet governance space. But their work, individually taken, when you analyze it, actually contributes to the global effort of tackling the issues of AI and governance in general, if I can say so. So I will give the floor back to you, Irina.

Irina Soeffky: Thank you very much. Perfect. Thank you so much, and indeed I couldn’t agree more. Thank you for your applause. Wonderful. And with that I think we’re at the end of, as you named it, the download session, which I think was incredibly rich and incredibly interesting, and I’m really so impressed by the diverse work that is ongoing. But with that we are at the core of this session, which is interaction and discussion, so let’s get ready, everybody here in the room and online. We hope for your questions, comments, thoughts, ideas, whatever. It’s open for everything. But while you get ready to prepare your contributions, I’ll turn to Mark: do you have anything to add at the moment, or should we hand it over to our participants here and online?

Mark Carvell: I think we should go straight to a discussion involving everybody here today and online. But just a word to say thanks very much to our colleagues for summarizing a vast amount of work in such an effective and comprehensive way. Many thanks indeed. Okay, there are also representatives of dynamic coalitions in the room. So if anybody has a really specific question relating to a dynamic coalition that’s not represented by any of our speakers here, there will possibly be somebody in the room who can pick it up, or at least we can take it away. And if you want to put a question now in the room, it’s a matter of going up to the podium over there with the microphone, I understand, is the arrangement. There is a core group of dynamic coalitions ready to engage with the GDC process and also potentially with the WSIS Plus 20 review. We haven’t touched on that; our focus really is on the GDC and sustainable development. So what do you think? Are you assured that dynamic coalitions do have an important role to play within the IGF ecosystem, contributing not only to advancing the IGF’s role in the follow-up to the Compact, but also to the wider diversity of specific issues covered by all these five objectives? So who wants to take the first question? I see somebody coming up to the podium. Please introduce yourself briefly and then put your question or comment, suggestion or whatever it is. Okay, thank you.

Audience: Good morning, this is Maarten Botterman. I’m also representing the Dynamic Coalition on the Internet of Things. What strikes me is that, from the outset, this dynamic coalition approach is one of pulling stakeholders together to discuss issues throughout the year and coming together, particularly at the IGF, to demonstrate what they think should be the way forward, taking into account what is happening today and what is needed in the future. So from that perspective, I think as dynamic coalitions we could contribute best by formulating our view, from the focus perspective of each specific dynamic coalition, on what global good practice looks like, taking into account where we are, what our values are and how we go forward. So, I look forward to hearing how others feel about this concept, and maybe even to promoting this as one of the default products, the outcomes, of dynamic coalitions each year.

Mark Carvell: Thank you, Maarten. Who wants to react or comment or add to that very valuable intervention? Thank you very much, Maarten. I see, I think it’s Wout de Natris walking up to the mic. And then I’ll check with Jutta if there’s somebody online. Okay, after Wout, and then to you, Jutta. Thank you.

Audience: My name is Wout de Natris. In the past, I was present at two sessions around the GDC and people from New York from the GDC. We have a gigantic opportunity as dynamic coalitions to be proactive. We can set the agenda perhaps a little bit, and if we wait, someone else will set the agenda for us. So, I would suggest that we set the agenda. If we have that, we can do it. We can set the agenda for what we think the implications are to be delivered in Oslo in June 2025. If we have that, we can communicate that to the people in New York on the global digital compact saying this is what we are going to contribute. And if we do that, we set the agenda, and we set the topics, at least in part. We can do that, and we can do that, and we can do that. And we can do that, and we can do that, and we can do that. So, we can do that, and from there, do our work and make sure that we have output in June. I promise you, as ISTC, we will have on quantum cryptography, for example, what’s the state in the world at this moment. We’re going to research that in the coming six months and report on it.

Mark Carvell: Thank you for that. I think it’s an opportunity for all of us to look ahead to the next IGF, in Lillestrøm, in Norway, and that’s a very practical proposal: to work towards that, taking into account all the relevant work we’ve been describing here that’s undertaken by the Dynamic Coalitions. So shall I turn to Xiao now to relay any comments online in relation to our discussion? Xiao, over to you. Thank you.

Online moderator: Yeah, thank you. I hope you can hear me well, and greetings from Portugal. We already have a couple of questions here in the chat, but I’ll start with one from Siva Subramanian, I hope I pronounced that correctly. It’s directed to Tatevik, but I guess all speakers can weigh in on this question, which was a specific question on the data-driven governance model that was mentioned, specifically in the context of AI. I believe the audience member wanted to hear a bit more about it and about the project that Tatevik presented. So if you can provide more insights, I guess the audience would be happy to hear more about this contribution to the GDC from this particular Dynamic Coalition.

Mark Carvell: Tatevik, did you want to pick that up? There was a bit of an echo around here, so it’s a bit hard to… Ah, you couldn’t quite hear. Xiao, could you just repeat it?

Online moderator: Sure. The question is quite broad, so I’m sure that others can also pick it up, but I’ll read it as it is: what is the data-driven governance that the speaker talked about, specifically in the context of AI?

Mark Carvell: Okay. Tatevik, can you pick that up? The relation to AI, I think, is the key element, yes? And anybody from the other Dynamic Coalitions is welcome to intervene as well.

Tatevik Grigoryan: So, I would like to talk a little bit about how the framework supports data-driven policymaking. As I mentioned, I work on the Internet Universality Indicators, so I can speak to how they support this. The framework does have indicators on AI currently, and also on issues that matter to data-driven policymakers, such as the right to privacy, and data governance is part of one of its themes. The approach of the IUI ROAM-X indicators is to collect data and help governments formulate policy recommendations, to provide governments with the necessary data and evidence in the national context, and to inform policies through the national data that is collected and processed. The IUI ROAM-X framework also takes a multistakeholder approach: there is a multistakeholder advisory board composed of governments, civil society organizations, the private sector, academia, and diverse stakeholders, to ensure that the data we collect is supported and at a later stage validated by representatives of the stakeholder groups, so that the resulting policy recommendations are accountable, evidence-based, and relevant to the national context. This is how the framework, and hence our dynamic coalition, supports data-driven policymaking.

Mark Carvell: Thank you. Thank you, Tatevik. That’s very helpful. Shall we just go quickly back to Xiao for any second point online, and then we’ll come back to the participants here in the room. Sorry, Olivier, you wanted to comment.

Olivier Crepin-Leblond: Just to jump in, I think this question was specifically regarding the Dynamic Coalition on Data and Artificial Intelligence Governance. Luca Belli is the chair of that coalition, and he is present at the IGF, so if anybody is interested in those issues, I would suggest you go and find him and ask him the question. They have elaborated and published a report in 2024, I can see here, with 44 authors and 25 analyses of this specific topic. That’s plenty of reading.

Mark Carvell: Okay, Xiao, do we have a second point online, bearing in mind the time we’ve got?

Online moderator: I’m happy to read one directed at the whole panel. It’s from Luis Martinez, asking: what is the view of the panel regarding how the GDC is going to benefit the work of the Dynamic Coalitions?

Mark Carvell: How is the GDC going to benefit the work of the coalitions? Who would like to take that on? I mean, it’ll certainly enhance the profile of Dynamic Coalitions in the IGF ecosystem, and the model that the coalitions provide for year-round activity of the IGF. Olivier, do you want to come in?

Olivier Crepin-Leblond: Yeah, thank you, Mark. I was going to say something along the lines of: ask not what the GDC can do for you, ask what you can do for the GDC. And in effect, I think that’s what the coalitions are working on.

Mark Carvell: Very succinct, great. Okay, well, does anybody in the room want to raise a new angle here? Yep, do you want to head over to the mic? That one over there, maybe it’ll work. Yes, thank you. And say who you are briefly.

Audience: Thank you. I am Dr. Rajendra Pratap Gupta and I lead three dynamic coalitions: Digital Economy, Digital Health and Environment. And I congratulate all my co-chairs and co-leads of dynamic coalitions for the wonderful work they do. I want to again emphasize that there are three things we have to look at when it comes to internet technologies: we should look at creating jobs, livelihood for all and internet for all. I think there are still 2.6 billion people not connected to the internet. As dynamic coalitions we have to decide, and I think we have indicators, so we should look at indicators first: how much did we impact in terms of livelihood creation? How far did we go to connect the unconnected? And I think a very important question was raised on data and AI. Data has two big issues: integrity of data and integration of data. And we should not prioritize intelligence over them. So I think data integrity and integration should come before artificial intelligence. And last but not least, a very positive change: this time I don’t see big tech in this. Something I’ve always emphasized: as the IGF we are the most impactful forum for internet technologies. We should aim for a large number of small companies rather than a small number of large companies, to create the jobs we want and to shape the internet we want, which is democratic. Thank you so much.

Mark Carvell: Thank you very much. We’re getting close to the end.

Audience: So, I’m Reyansh Gupta, and I lead the Dynamic Coalition on Gaming for Purpose. I just want to add something to the discussions we’re having. Dynamic coalitions are doing great work across sectors, but gaming acts as a universal language, and I think gaming can be integrated everywhere. There are multiple sectors and multiple areas where gamification and gaming can contribute to how we deliver social good itself. That’s just what I wanted to add. Thank you.

Mark Carvell: Thanks very much. One more. Yes.

Audience: Hi. Good morning, everyone. Thank you for your efforts on this great platform of the IGF. I have made this intervention in previous sessions, and I would like to emphasize the importance of taking intellectual property rights into consideration in the digital environment. I am pleased, on behalf of my organization, the Creators Union of Arab, to take part in the coalitions, and we take this opportunity at the IGF this year to announce the launch of a platform for protecting intellectual property in the digital arena. It’s called Intellectual Property Verification. So it is our pleasure to contribute this aspect of intellectual property rights with you. Thank you.

Mark Carvell: Thanks very much, and best of luck with the launch of that platform. That’s great news, I’m sure. A great contribution, potentially. Okay. We’d better wrap up. So I’m going to look to Jutta, and maybe Markus, who is in the room; do you want to say anything? No? Shaking your head? Okay. All right. Markus is our coordinator. Anyway, Jutta, do you want to help us wrap up today? Thank you.

Jutta Croll: Does it work? Okay. I tried to take notes of all the important content and comments that we’ve heard. Bringing the GDC to life is what we have to do now, and we’ve learned today that the 31 dynamic coalitions are already working to implement the GDC and its five main objectives. Let me just refer to some of the very important things we’ve heard. We were talking about a shared responsibility for marginalized groups, for disadvantaged groups, and also for those whose voices have not been heard enough so far. We’ve been talking about overcoming barriers, and that is also done in the collaboration of dynamic coalitions. We’ve heard that the Internet is the young people’s network, and that the GDC asks for child rights as a priority; child rights impact assessments are important in all the work that we are doing, because children are the inhabitants of the Internet world, of the digital environment, and therefore it is very important to follow up on that within the work of the dynamic coalitions. We have also heard about data and the importance of data in the work that the dynamic coalitions do; it gives us orientation on where we can and should go. We’ve heard a lot about safety and security, and about the standards we need for safety and security to make our work effective. Eventually, we’ve also heard about the very data-driven work of the dynamic coalitions on data-driven health technologies and on digital health, and I take this opportunity to thank our Saudi Arabian colleagues. We have dynamic coalitions dating back to the very beginning, 19 years ago, with the first Internet Governance Forum; many dynamic coalitions started their work in 2006 and 2007. And it’s an open concept, so it’s bottom-up. Everybody who wants to join the work of dynamic coalitions can just go to the IGF website; it’s open, and the mailing lists are open. You can go there, and you can join us in our work to implement the Global Digital Compact. Thank you so much.

Irina Soeffky: Thank you, Jutta. This is indeed also my personal takeaway: we need more work and more impact from the Dynamic Coalitions, be it in the implementation of the GDC or in the future work of the IGF. This has been incredibly inspiring today, and I have lots of ideas to bring back home, to think more about and to discuss in more detail. So thank you, everybody. And with that, over to you, Mark.

Mark Carvell: Great. Well, what can I add to that? I think it’s been a great session. I hope people who perhaps weren’t aware of Dynamic Coalitions have learned a lot today about the scope and range of their activities and their direct relevance to the objectives of the Compact and to the Sustainable Development Agenda, the goals, the SDGs. And as I said earlier when I touched on the WSIS Plus 20 Review, I have a shout-out there for the Dynamic Coalition on Environment: we heard in one of the sessions today that environmental issues will be a major component of the WSIS Plus 20 Review, and we have a Dynamic Coalition comprising experts in that very field. So that’s the core message today. If you go to the IGF website, you will see the list of Dynamic Coalitions. You just go to the menu and scroll down, and then you can click on every coalition, maybe the ones you’re particularly interested in, and you can find out a whole lot more about them: what their missions are, their action plans, what they are delivering in terms of tangible outcomes, and how they are engaging in wider processes, such as those of the UN, that are now providing the context for all our work over the year ahead. So let’s all work together on that, and ensure that dynamic coalitions fulfill that potential. Okay, I’d better stop there, because we are over time, I guess, and I won’t ramble on anymore. I’ll conclude with thanks to everybody who contributed, all the planners of the session, the technical support team here in Riyadh, the hosts who have done such a fantastic job this morning, and our panellists for their diligence and hard-pressed work to capture key elements of all the dynamic coalitions that have stepped forward in the context of the GDC. Many thanks again to all of you. Okay. I’ll stop there. Thank you.

Online participant

Speech speed: 118 words per minute | Speech length: 892 words | Speech time: 452 seconds
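(For reference, the speech speed figures in these statistics appear to be derived from the other two values: words divided by seconds, scaled to a minute. Here, 892 words / 452 seconds × 60 ≈ 118 words per minute.)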

Small Island Developing States face unique challenges in digital development

Explanation: Small Island Developing States (SIDS) encounter specific obstacles in their digital development due to their unique characteristics. These challenges include limited resources, remoteness, and vulnerability to external economic shocks and natural disasters.

Evidence: The speaker mentions that SIDS have small populations, limited resources, and are susceptible to natural disasters and external economic shocks.

Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives

Muhammad Shabbir

Speech speed: 0 words per minute | Speech length: 0 words | Speech time: 1 second

Dynamic coalitions on financial inclusion, open education, accessibility, and environment contribute to expanding digital economy benefits

Explanation: Various dynamic coalitions work together to promote inclusivity and accessibility in the digital economy. These coalitions focus on different aspects such as financial inclusion, open education, accessibility for persons with disabilities, and environmental sustainability.

Evidence: The speaker mentions specific coalitions: DCDFI (Dynamic Coalition on Digital Financial Inclusion), DCOER (Dynamic Coalition on Open Educational Resources), DCAD (Dynamic Coalition on Accessibility and Disability), and DCE (Dynamic Coalition on Environment).

Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives

Agreed with: Olivier Crepin-Leblond, Tatevik Grigoryan, Yao Amevi Amnessinou Sossou, Jutta Croll
Agreed on: Dynamic coalitions contribute to implementing GDC objectives

There is an opportunity for coalitions to collaborate and address common barriers

Explanation: Dynamic coalitions have the potential to work together on shared challenges and objectives. By collaborating, coalitions can more effectively address common barriers and achieve greater impact in their respective areas of focus.

Evidence: The speaker provides examples of collaboration between DCAD and DCDFI on making banking systems accessible, and between DCAD and DCOER on ensuring digital accessibility in education.

Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation

Olivier Crepin-Leblond

Speech speed: 155 words per minute | Speech length: 1125 words | Speech time: 433 seconds

Coalitions on internet rights, children’s rights, youth engagement, and internet standards contribute to fostering safe and inclusive digital spaces

Explanation: Several dynamic coalitions work towards creating a safe and inclusive digital environment. These coalitions focus on various aspects such as internet rights, children’s rights, youth engagement, and internet security standards.

Evidence: The speaker mentions specific coalitions: Internet Rights and Principles Coalition, Dynamic Coalition on Children’s Rights in the Digital Environment, Youth Coalition on Internet Governance, and Internet Standards, Security and Safety Coalition (IS3C).

Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives

Agreed with: Muhammad Shabbir, Tatevik Grigoryan, Yao Amevi Amnessinou Sossou, Jutta Croll
Agreed on: Dynamic coalitions contribute to implementing GDC objectives

Tatevik Grigoryan

Speech speed: 160 words per minute | Speech length: 898 words | Speech time: 335 seconds

Coalitions on data governance, AI, and internet universality indicators contribute to advancing responsible data governance

Explanation: Dynamic coalitions focusing on data governance, AI, and internet universality indicators play a crucial role in promoting responsible data governance. These coalitions work on developing frameworks and tools for assessing and improving data governance practices.

Evidence: The speaker mentions the Internet Universality Indicators as a comprehensive tool for evaluating internet governance and policy-making.

Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives

Agreed with: Muhammad Shabbir, Olivier Crepin-Leblond, Yao Amevi Amnessinou Sossou, Jutta Croll
Agreed on: Dynamic coalitions contribute to implementing GDC objectives

Coalitions can contribute to data-driven policymaking and national assessments

Explanation: Dynamic coalitions play a role in promoting data-driven policy-making and conducting national assessments. Their work helps inform government policies and provides valuable data for decision-making processes.

Evidence: The speaker mentions the Internet Universality Indicators framework as a tool for collecting data and helping governments formulate policy recommendations.

Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation

Yao Amevi Amnessinou Sossou

Speech speed: 133 words per minute | Speech length: 627 words | Speech time: 281 seconds

Coalitions on gender, IoT, blockchain, and digital health contribute to enhancing AI governance

Explanation: Various dynamic coalitions work on different aspects that contribute to enhancing AI governance. These coalitions focus on gender issues, Internet of Things, blockchain technology, and digital health, all of which intersect with AI governance.

Evidence: The speaker mentions specific coalitions working on gender, IoT, blockchain, and digital health, highlighting their contributions to AI governance discussions.

Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives

Agreed with: Muhammad Shabbir, Olivier Crepin-Leblond, Tatevik Grigoryan, Jutta Croll
Agreed on: Dynamic coalitions contribute to implementing GDC objectives

Audience

Speech speed: 164 words per minute | Speech length: 1176 words | Speech time: 429 seconds

Dynamic coalitions can proactively set the agenda for GDC implementation

Explanation: Dynamic coalitions have the opportunity to take a proactive role in shaping the implementation of the Global Digital Compact. By setting the agenda, coalitions can ensure their expertise and priorities are reflected in the GDC implementation process.

Evidence: The speaker suggests that coalitions should communicate their planned contributions to the people in New York working on the Global Digital Compact.

Major Discussion Point 2: Role and Impact of Dynamic Coalitions

Coalitions should focus on creating jobs, connecting the unconnected, and data integrity

Explanation: Dynamic coalitions should prioritize three key areas in their work: job creation, expanding internet access to the unconnected population, and ensuring data integrity. These focus areas are crucial for addressing global digital challenges.

Evidence: The speaker mentions that 2.6 billion people are still not connected to the internet and emphasizes the importance of data integrity and integration.

Major Discussion Point 2: Role and Impact of Dynamic Coalitions

Gaming and gamification can be integrated across sectors to contribute to social good

Explanation: Gaming and gamification techniques can be applied across various sectors to promote social good. This approach can enhance engagement and effectiveness in addressing social issues through digital means.

Evidence: The speaker mentions the Dynamic Coalition on Gaming for Purpose and suggests that gaming can act as a universal language.

Major Discussion Point 2: Role and Impact of Dynamic Coalitions

Intellectual property rights need consideration in the digital environment

Explanation: The importance of intellectual property rights in the digital realm needs to be addressed. Protecting intellectual property is crucial as digital technologies continue to evolve and impact creative industries.

Evidence: The speaker announces the launch of a platform called Intellectual Property Verification to protect intellectual property in the digital area.

Major Discussion Point 2: Role and Impact of Dynamic Coalitions

Jutta Croll

Speech speed: 220 words per minute | Speech length: 472 words | Speech time: 128 seconds

Dynamic coalitions are already working to implement the GDC objectives

Explanation: The 31 existing dynamic coalitions are actively engaged in work that aligns with and implements the objectives of the Global Digital Compact. Their ongoing efforts contribute directly to the realization of the GDC goals.

Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation

Agreed with: Muhammad Shabbir, Olivier Crepin-Leblond, Tatevik Grigoryan, Yao Amevi Amnessinou Sossou
Agreed on: Dynamic coalitions contribute to implementing GDC objectives

Mark Carvell

Speech speed: 136 words per minute | Speech length: 2242 words | Speech time: 988 seconds

Coalitions provide a platform for year-round IGF activity and multistakeholder engagement

Explanation: Dynamic coalitions serve as a mechanism for continuous engagement within the Internet Governance Forum ecosystem. They enable ongoing multistakeholder participation and work throughout the year, beyond just the annual IGF event.

Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation

Agreements

Agreement Points

Dynamic coalitions contribute to implementing GDC objectives

Speakers: Muhammad Shabbir, Olivier Crepin-Leblond, Tatevik Grigoryan, Yao Amevi Amnessinou Sossou, Jutta Croll

Arguments:

– Dynamic coalitions on financial inclusion, open education, accessibility, and environment contribute to expanding digital economy benefits

– Coalitions on internet rights, children’s rights, youth engagement, and internet standards contribute to fostering safe and inclusive digital spaces

– Coalitions on data governance, AI, and internet universality indicators contribute to advancing responsible data governance

– Coalitions on gender, IoT, blockchain, and digital health contribute to enhancing AI governance

– Dynamic coalitions are already working to implement the GDC objectives

Multiple speakers highlighted how various dynamic coalitions are actively working on different aspects that align with and contribute to the implementation of GDC objectives.

Similar Viewpoints

Both speakers emphasize the importance of dynamic coalitions in addressing economic and developmental aspects of digital inclusion, particularly focusing on job creation and connecting the unconnected.

Speakers: Muhammad Shabbir, Audience

Arguments: Dynamic coalitions on financial inclusion, open education, accessibility, and environment contribute to expanding digital economy benefits; Coalitions should focus on creating jobs, connecting the unconnected, and data integrity

Both speakers highlight the importance of data governance and integrity in the work of dynamic coalitions.

Speakers: Tatevik Grigoryan, Audience

Arguments: Coalitions on data governance, AI, and internet universality indicators contribute to advancing responsible data governance; Coalitions should focus on creating jobs, connecting the unconnected, and data integrity

Unexpected Consensus

Proactive role of dynamic coalitions in shaping GDC implementation

Speakers: Audience, Jutta Croll, Mark Carvell

Arguments: Dynamic coalitions can proactively set the agenda for GDC implementation; Dynamic coalitions are already working to implement the GDC objectives; Coalitions provide a platform for year-round IGF activity and multistakeholder engagement

There was an unexpected consensus on the proactive role dynamic coalitions can and should play in shaping the implementation of the Global Digital Compact, rather than just responding to it. This suggests a shift towards a more active and influential role for these coalitions in global internet governance.

Overall Assessment

Summary: The main areas of agreement centered around the significant contributions of dynamic coalitions to implementing GDC objectives, their potential for collaboration, and their role in proactively shaping internet governance discussions.

Consensus level: There was a high level of consensus among speakers regarding the importance and potential impact of dynamic coalitions in addressing various aspects of internet governance and implementing the Global Digital Compact. This strong consensus implies that dynamic coalitions are likely to play an increasingly central role in future internet governance processes and in the implementation of the GDC.

Differences

Unexpected Differences

Emphasis on specific technologies

Speakers: Audience, Yao Amevi Amnessinou Sossou

Arguments: Gaming and gamification can be integrated across sectors to contribute to social good; Coalitions on gender, IoT, blockchain, and digital health contribute to enhancing AI governance

While most speakers focused on broader themes, these speakers unexpectedly emphasized specific technologies (gaming, IoT, blockchain) as key areas for Dynamic Coalitions to address. This highlights a potential difference in approach between technology-specific and theme-based coalition work.

Overall Assessment

Summary: The main areas of disagreement revolve around the prioritization of focus areas for Dynamic Coalitions, the balance between proactive agenda-setting and ongoing work, and the emphasis on specific technologies versus broader themes.

Difference level: The level of disagreement among speakers is relatively low. Most differences appear to be more about emphasis and prioritization rather than fundamental disagreements. This suggests that there is a general consensus on the importance of Dynamic Coalitions in implementing the GDC objectives, but some variation in how different stakeholders believe this should be approached. These differences could potentially lead to a more comprehensive and diverse approach to addressing digital governance challenges, provided that effective coordination and collaboration mechanisms are in place.

Partial Agreements

Speakers agree on the importance of Dynamic Coalitions in implementing the Global Digital Compact (GDC) objectives, but they differ in their emphasis on proactive agenda-setting versus ongoing work and engagement.

Speakers: Audience, Jutta Croll, Mark Carvell

Arguments: Dynamic coalitions can proactively set the agenda for GDC implementation; Dynamic coalitions are already working to implement the GDC objectives; Coalitions provide a platform for year-round IGF activity and multistakeholder engagement


Takeaways

Key Takeaways

– Dynamic coalitions are actively contributing to the implementation of the Global Digital Compact (GDC) objectives across various domains

– Dynamic coalitions provide a platform for year-round IGF activity and multistakeholder engagement on internet governance issues

– There is potential for increased collaboration between dynamic coalitions to address common challenges and barriers

– Dynamic coalitions can play an important role in data-driven policymaking and national assessments related to internet governance

– The work of dynamic coalitions is relevant not only to the GDC but also to the Sustainable Development Goals and WSIS+20 Review

Resolutions and Action Items

– Dynamic coalitions to prepare contributions for the next IGF in Lillestrøm, Norway in 2025

– Coalitions to focus on creating tangible outcomes and impacts related to GDC objectives

– Encourage more stakeholders to join and participate in dynamic coalition activities

Unresolved Issues

– Specific mechanisms for dynamic coalitions to formally contribute to the GDC implementation process

– How to measure and evaluate the impact of dynamic coalitions’ work on GDC objectives

– Ways to increase visibility and recognition of dynamic coalitions’ contributions within the broader IGF ecosystem

Suggested Compromises

– Balance between coalition-specific work and collaborative efforts across coalitions

– Integrating new topics like gaming and intellectual property rights into existing coalition frameworks

Thought Provoking Comments

Comment: “While many in the global internet community […] are fully engaged with and attuned to the developments surrounding WCIT, WTSA, and ICANN, […] it is becoming increasingly apparent that a greater degree of polarization and marginalization in the area of internet policy and strategy has been slowly occurring.”
Speaker: June Paris
Reason: This comment highlights a critical issue of inequality and marginalization in internet governance that is often overlooked.
Impact: It shifted the discussion to focus more on inclusion of underrepresented groups, particularly small island developing states, in internet policy.

Comment: “At least one-third of Internet users are under 18 […] and their rights must be protected as outlined in the U.N. convention on the rights of the child.”
Speaker: Olivier Crepin-Leblond
Reason: This statistic provides important context about the demographics of internet users and raises the issue of children’s rights online.
Impact: It brought attention to the need for child-centric policies and protections in internet governance, which was further discussed by other speakers.

Comment: “Your interventions are so tremendously important […] I’m speaking of content, of wanting to change things and help with the implementation. Everybody else is talking about process.”
Speaker: Wout de Natris
Reason: This comment emphasizes the importance of focusing on concrete actions and implementation rather than just process.
Impact: It sparked a discussion about how Dynamic Coalitions can be more proactive and action-oriented in contributing to the Global Digital Compact.

Comment: “We should look for creating jobs, livelihood for all and internet for all. I think we are still 2.6 billion people not connected to the internet.”
Speaker: Dr. Rajendra Pratap Gupta
Reason: This comment refocuses the discussion on core issues of internet access and economic opportunity.
Impact: It broadened the conversation to include economic and development aspects of internet governance, beyond just technical and policy considerations.

Comment: “Dynamic coalitions are doing great work across sectors, but gaming acts as a universal language and I think gaming can be integrated everywhere.”
Speaker: Reyansh Gupta
Reason: This comment introduces a novel perspective on how gaming can be leveraged across various sectors for social good.
Impact: It opened up a new avenue for discussion about innovative approaches to achieving the goals of the Global Digital Compact.

Overall Assessment: These key comments shaped the discussion by broadening its scope beyond technical aspects of internet governance to include issues of inclusion, children’s rights, economic opportunity, and innovative approaches like gaming. They helped shift the focus from process to concrete actions and implementation, while also highlighting the need to include underrepresented groups in internet policy discussions. The comments collectively emphasized the multifaceted nature of internet governance challenges and the need for diverse, creative solutions.

Follow-up Questions

Question: How can dynamic coalitions collaborate more effectively to address common barriers for marginalized communities?
Speaker: Mark Carvell
Explanation: This was raised as an important area to explore further collaboration among coalitions working on similar issues.

Question: What specific contributions can dynamic coalitions make to the Global Digital Compact by the 2025 IGF in Norway?
Speaker: Wout de Natris
Explanation: This was suggested as a way for dynamic coalitions to proactively set the agenda and contribute concrete outputs.

Question: How can the work of dynamic coalitions be measured in terms of impact on livelihood creation and connecting the unconnected?
Speaker: Dr. Rajendra Pratap Gupta
Explanation: This was proposed as a way to assess the real-world impact of dynamic coalitions’ work.

Question: How can gaming and gamification be integrated into the work of other dynamic coalitions?
Speaker: Reyansh Gupta
Explanation: This was suggested as a potential area for cross-coalition collaboration and innovation.

Question: How can intellectual property rights be better incorporated into discussions of the digital environment?
Speaker: Unnamed participant from the Creators Union of Arab
Explanation: This was raised as an important area that needs more consideration in digital governance discussions.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #171 Mind Your Body: Pros and Cons of IoB

WS #171 Mind Your Body: Pros and Cons of IoB

Session at a Glance

Summary

This discussion focused on the emerging field of Internet of Bodies (IoB) technologies and their implications for healthcare, cybersecurity, and society. Experts from various fields explored how IoB devices, which integrate technology with the human body, are revolutionizing medical care while also raising significant ethical and security concerns.

The medical benefits of IoB were highlighted, including improved diagnostics, remote patient monitoring, and personalized treatments. However, participants emphasized the need for robust cybersecurity measures to protect against potential hacking and data breaches, which could have severe consequences given the intimate nature of these devices.

The discussion also delved into the societal impacts of IoB technologies. Concerns were raised about potential inequality in access to these technologies, leading to a divide between “enhanced” and “unenhanced” individuals. The ethical implications of altering human biology and the risks of manipulation through IoB devices were also explored.

Participants stressed the importance of developing comprehensive regulatory frameworks to govern the development, implementation, and use of IoB technologies. They emphasized the need for collaboration between governments, private companies, and individuals to ensure responsible innovation and protect user rights.

The discussion concluded by acknowledging the inevitability of IoB advancements while emphasizing the critical need for ongoing research, public awareness, and proactive policymaking to harness the benefits of these technologies while mitigating potential risks.

Keypoints

Major discussion points:

– Medical applications and benefits of Internet of Bodies (IoB) devices, including AI-assisted diagnostics and remote patient monitoring

– Cybersecurity risks and privacy concerns associated with IoB devices

– Potential for IoB technologies to widen social and economic inequalities

– Ethical implications and need for regulation of IoB technologies

– Future impacts of IoB on human evolution and society

The overall purpose of the discussion was to explore the various implications – both positive and negative – of Internet of Bodies technologies as they become more prevalent. The speakers aimed to raise awareness about the potential benefits as well as risks that need to be considered and addressed.

The tone of the discussion was generally cautious and concerned, while still acknowledging the potential benefits of IoB technologies. Speakers highlighted exciting medical applications but also emphasized the need for careful regulation and consideration of ethical issues. The tone became more apprehensive when discussing future scenarios of human augmentation and potential societal divides, but remained analytical rather than alarmist.

Speakers

– Alina Ustinova: Moderator

– Lev Pestrenin: Deputy Department Head of Center for Diagnostics and Telemedicine of the Moscow Health Department

– Igor Sergeyev: Head Researcher of Federal Center for Applied Development of Artificial Intelligence

– Irina Pantina: JR Director of Positive Technologies

– Gabriella Marcelja: CEO of CJ Impact Ventures

– James Nathan Adjartey Amattey: Speaking from Ghana, expertise in technology and regulation

Additional speakers:

– Anna Kralina-Dias: Online moderator (mentioned but did not speak)

Full session report

Internet of Bodies (IoB) Technologies: Implications for Healthcare, Cybersecurity, and Society

This comprehensive discussion explored the emerging field of Internet of Bodies (IoB) technologies and their wide-ranging implications for healthcare, cybersecurity, and society. Experts from various fields, including medical diagnostics, artificial intelligence, cybersecurity, and venture capital, convened to examine how IoB devices, which integrate technology with the human body, are revolutionizing medical care while simultaneously raising significant ethical and security concerns.

Medical Applications and Benefits

The discussion began on an optimistic note, with several speakers highlighting the potential medical benefits of IoB technologies. Lev Pestrenin, Deputy Department Head of the Center for Diagnostics and Telemedicine of the Moscow Health Department, emphasized that artificial intelligence and IoB could improve healthcare quality and access. This sentiment was echoed by Igor Sergeyev, Head Researcher of the Federal Center for Applied Development of Artificial Intelligence, who noted that IoB devices enable remote patient monitoring and early disease detection.

Gabriella Marcelja, CEO of CJ Impact Ventures, further elaborated on how IoB implants and wearables can enhance medical diagnostics and treatment. Specific examples of IoB devices mentioned included smart lenses, insulin pumps, and cochlear implants, illustrating the diverse range of applications in healthcare.

Cybersecurity Risks and Privacy Concerns

The conversation shifted to address the serious cybersecurity risks associated with IoB technologies. Irina Pantina, JR Director of Positive Technologies, warned that IoB devices are vulnerable to hacking, data breaches, and unauthorized access. She stressed the urgent need for robust cybersecurity measures to protect IoB users. Gabriella Marcelja concurred, highlighting the risks of surveillance and privacy violations posed by these technologies.

Alina Ustinova provided a thought-provoking insight: “Sometimes when we speak about internet of bodies, we need to understand that if something is integrated into your body, that means that you are a computer yourself and you can be hacked as a computer.” This comment effectively shifted the discussion towards deeper consideration of the ethical concerns surrounding IoB technologies.

James Nathan Adjartey Amattey, speaking from Ghana, raised a unique challenge specific to internal IoB devices: “Now, if it’s an external software system, you know, there could be preventive measures that could be implemented, but if it’s in the body, and then the body is now being misconfigured, how do we reconfigure that?” This comment highlighted the complex technical and ethical challenges associated with updating or fixing internal IoB devices.

Ethical Implications and Societal Impact

The discussion then delved into the broader ethical and societal implications of IoB technologies. Gabriella Marcelja raised concerns about the potential for IoB to widen socioeconomic gaps and create “augmented elites”. She cautioned, “So wealthy nations will adopt augmentation tech faster. So we will see a marginalised group of countries. And we definitely need to eventually ensure equal access if this is in the interest of the patient.” Marcelja drew an analogy comparing IoB device access to organ transplant waiting lists, highlighting the potential for inequality in access to these technologies.

James Nathan Adjartey Amattey emphasized the potential for manipulation and social engineering through IoB devices, as well as their possible use as tools for government control or weapons. He stressed the importance of community awareness and education about IoB technologies, especially in developing countries.

Gabriella Marcelja also raised the possibility of a “gray economy” of fake implants emerging, further complicating the ethical landscape of IoB technologies.

Governance and Ethical Considerations

The need for comprehensive governance frameworks to address the ethical and societal challenges posed by IoB technologies emerged as a crucial point of discussion. Irina Pantina emphasized the importance of multi-stakeholder cooperation in governing IoB responsibly. She stated, “So establishing the rules for how to use these devices, how to let them enter the markets and how to be affordable for different groups of people. Without any dependencies.”

Lev Pestrenin highlighted the need for multidisciplinary teams and improved communication to address the challenges of IoB. This point underscored the complex nature of developing and implementing such rapidly advancing technologies.

Conclusion

The discussion concluded by acknowledging the significant potential benefits of IoB technologies in healthcare while emphasizing the critical need for ongoing research, public awareness, and proactive policymaking to mitigate potential risks. The speakers demonstrated a moderate level of consensus, particularly on the need for responsible development and governance of IoB technologies.

The conversation raised several thought-provoking questions for future consideration, including strategies for managing cybersecurity risks, preventing social inequality due to unequal access, maintaining personal autonomy in an increasingly connected world, and addressing the potential for manipulation and misuse of IoB devices. These questions highlight the need for continued interdisciplinary dialogue and research to address the complex challenges posed by Internet of Bodies technologies.

Session Transcript

Alina Ustinova: These devices are used for medical purposes, but because this is a new technology, we should understand how it will affect not only us as human beings and our evolution, but also cybersecurity and other emerging technology issues. So during this session we will try to understand how this technology will affect us. We are joined by wonderful speakers here; I will introduce them all shortly, and they will try to answer your questions. Here is how the discussion will go: we will be divided into blocks. The first block will be medical, followed by a little Q&A session. Then we have the AI and emerging technology block, followed by another little Q&A session. And the last one is the cybersecurity block, but it depends on how it goes. If you want to ask questions after the session, that is possible too. The organizers asked us to finish a little bit earlier, so we will finish at about 10:50. So thank you very much. Today we are joined by Lev Pestrenin, the Deputy Department Head of the Center for Diagnostics and Telemedicine of the Moscow Health Department. We also have Igor Sergeyev, the Head Researcher of the Federal Center for Applied Development of Artificial Intelligence; he is online. Gabriella Marcelja, the CEO of CJ Impact Ventures. Irina Pantina, the JR Director of Positive Technologies. James Amattey of Norrison IT, who is also with us online. And we have a wonderful online moderator, Anna Kralina-Dias, who I hope will help me in this session and will ask the questions coming from the chat online. So we will start with the medical block and with understanding how these devices are used for medical purposes. Lev, please start. The floor is yours. Thank you.

Lev Pestrenin: Thank you very much. Good morning, everyone. Thanks for coming. I am a researcher at the Moscow Center for Diagnostics and Telemedicine, where we implement artificial intelligence in healthcare, specifically in radiology. So why am I sitting here? I am sure that artificial intelligence and the Internet of Bodies have something in common: both are new technologies, and we want to benefit from them while also avoiding their risks. So today, using the example of implementing artificial intelligence in healthcare, I would like to show how it is possible to really benefit and to avoid the risks at the same time. One moment. Something is wrong with my presentation. Yes. Okay, thank you very much. So, in a few words, what does radiology in Moscow look like? As usual, patients undergo radiological studies, for example a chest X-ray; then all the images from all the hospitals of Moscow go to the data center located in our Center for Diagnostics and Telemedicine. After that, radiologists describe these images and make reports, and in one, two, or sometimes a little more hours the doctor and the patient already have the results of the examination. Next slide, please. Okay, so this is possible thanks to a centralized healthcare system. It took several years to centralize all the radiological data and descriptions, and then it became possible to start the Moscow experiment in implementing artificial intelligence in healthcare. We started this experiment in 2020, and at the beginning we had a lot of difficulties and a lot of questions, because it is a new technology and there were no answers about how we could use it while avoiding the different risks of artificial intelligence. But generally we managed to overcome these risks. So today I would like to tell you about the three main components that were the key to this success. The first component is data. High-quality datasets are very important for training and testing artificial intelligence. What is a dataset? It is a set of radiological studies together with their reports. Using these datasets, it is possible to train artificial intelligence to find some kind of pathology, like pneumonia or cancer for example, and it is possible to test it. So for data, we follow one of the main principles: organized storage and organized collection of data. The second component is monitoring. Scientists, researchers, and developers all over the world have spoken about the power of artificial intelligence; some said it could be used so widely that doctors would be left without work. But now we see that this is not so. Artificial intelligence is a great help for doctors: it can make measurements and help the doctor to find some pathology, but for now artificial intelligence cannot replace doctors at all. That is why it is necessary to monitor how artificial intelligence works. We developed and successfully use a life cycle for checking the quality of artificial intelligence. We do it regularly, every month, and we make an assessment of the artificial intelligence to be sure that the medical care is of high quality. This life cycle also helps us to improve artificial intelligence, because we can find its mistakes, change these AI services, and improve their quality. And the third principle is ethics.
These are the basic principles of medical ethics, which have been used for many centuries, and we still follow them. The main ones are patient privacy, patient safety, and patient confidentiality. Artificial intelligence can improve the quality of healthcare, but without following ethics, I think it is impossible to use it, because it can bring patients more harm than benefit. So, thanks to these three components, high-quality datasets, the monitoring of artificial intelligence, and ethical principles, we could provide patients with a very simple pathway for radiological studies. You see it here: as usual, the patient undergoes a radiological examination, then the doctor describes it and writes a report, and after that the patient and the physicians get the results of the study. And here, at step three, you see that radiologists now get two images for every patient: the first is the native image, and the second is an image processed by artificial intelligence. That is how it is possible to use artificial intelligence and benefit from it. And what are the key achievements, the key results of implementing artificial intelligence? Here you see some facts, but generally speaking there are three main achievements. The first is improved quality of healthcare. The second is eased access to health services. And the last is enhanced safety of patients. I would like to show you one more slide about how artificial intelligence works. You see, we have more than 50 AI solutions in Moscow, and they can detect different types of pathology on different types of studies, like X-ray, computed tomography, or MRI, for example. To conclude my presentation, I would like to say that artificial intelligence, like the Internet of Bodies, is our possible future, which could help us to live longer and to be healthier throughout our lives. So I would like to invite you to visit our center to learn more about artificial intelligence in healthcare. It is very easy to visit us, and you can organize a visit. We will be happy to see you in our center. Thank you very much.

Alina Ustinova: This was a very interesting presentation; I guess people learned a lot from it. Maybe a small question before we move to the next speaker. In your opinion, in your experience, how do you think the introduction of these devices, their integration into the human body, will affect human evolution? For example, will we be part robots? Just a futuristic question.

Lev Pestrenin: Yes, thank you for the interesting question. I think you are right; maybe in the future we will look a little bit like robots. We do not know all the technologies yet, but there is a very interesting thought that over a short period of time we tend to overestimate technologies, and over a longer period of time we tend to underestimate them. So after 20 or 30 years, maybe we will have bionic eyes or something like that, and yes, we would do our work faster, I think, and we could get a lot of benefits. But I am sure that the use of all these devices should be controlled first of all by doctors, because these devices should never be harmful to people.

Alina Ustinova: Okay, thank you very much for your answer. Now we move to the next speaker, and we will try to understand how monitoring devices are actually making people’s lives easier, especially in remote areas where a hospital is not an option for some people. Igor, please, the floor is yours. Thank you.

Igor Sergeyev: Good morning, ladies and gentlemen. I will present the state of the home care sector in terms of the available diagnostics and vital sign monitoring devices. The COVID-19 pandemic became a catalyst for the development and production of individual monitoring devices. The Russian Federation’s enormous territory affects access to clinical institutions, and for some patients these kinds of devices are the only solution for monitoring their condition. Such devices include a range of CO2 and ECG recorders, respiration recorders, analyzers of abnormal glucose and cholesterol levels, and other devices. Innovative companies of the Russian Federation, including Skolkovo residents, are advancing personalized medicine by implementing software solutions based on these devices, including artificial intelligence, to improve the level of diagnostics among the adult population. Currently, the registry of such solutions in the Russian Federation includes the neural network software system Care Mentor AI (website: http://carementor.ru); the software Sales (website: http://sales.ii); artificial intelligence software systems for the analysis of cardiology studies (website: t-stars.ru); a medical decision support system, the electronic clinical pharmacologist (website: www.sp.upb.com); and the medical decision support software system webioMEDS (website: webioMEDS.ii). The rapid development of devices for monitoring functional indicators began at the turn of the 2000s, but their widespread implementation in the system of high-tech care began much later. Examples of such projects in the Russian Federation are remote care services and professional artificial intelligence systems. The development and implementation of health indicator remote monitoring services will provide an undeniable advantage for all interested participants, since using these services will make it possible to reduce mortality and disability in the population through the early detection of developing cardiovascular and endocrine diseases in citizens at risk. Continuous monitoring and analysis of patients’ health indicators ensure that a doctor is alerted in case of critical deviations in the monitored indicators. Such services help to increase the number of patients under observation, to improve the quality of medical care for the population without the need for frequent visits to medical institutions, to reduce the costs of medical institutions by reducing the need for hospitalization, and to improve the medical knowledge base by processing and analyzing large amounts of patient medical data. That completes my report. Thank you for your attention. I am happy to take your questions.

Alina Ustinova: Thank you for the presentation. I guess you are right that sometimes, when medical services are not available, it is better to have a device that can track your medical condition, so that you can send the data to a healthcare institution to understand what kind of illness you have. And that is what creates another risk we are going to talk about: cybersecurity risk. We can talk about data security, but what about human security? For example, sometimes when we speak about internet of bodies, we need to understand that if something is integrated into your body, that means that you are a computer yourself and you can be hacked as a computer. So the main question is how to deal with that. Please, Irina, if you can say something: how can we manage the cybersecurity risk turning into a cyber-human risk, something like that?

Irina Pantina: Thank you, Alina. Let me take my headphones off. Let me start with a brief introduction to the types and classification of Internet of Bodies devices. I would classify them by whether or not they can influence you as a human, or influence your body. The first type of device that also needs to be protected is the so-called wearable. Its main feature is to transmit data to central storage, to a central data warehouse, and the main risk arising from this group of things is data leakage during data transfer; all the risks we know about data leaks apply to these devices. Another group is more interesting from a cybersecurity perspective: devices that interact with a central processor and that can exert a controlled influence on your body. For example, we do not expect a cochlear implant to be manipulated, but it controls the tones and signals sent to your ear, even if you are deaf, so it could change your behavior. The next example is a device Igor mentioned, the insulin pump: when the dosage can be changed remotely, you can influence the body and cause various injuries. And all of you remember the recent example of the September terrorist attack with smartphones and pagers in Lebanon and Syria; that is an example of how a transmitted signal can damage large groups of people. If we are talking about widely spread technologies for people with different types of diseases, their users could become interesting and very important targets for different types of criminals. You also know the example when one of the top officials in the US, who had a pacemaker implant, asked to have its remote management turned off when he took a very high-level chair, due to the risk of being attacked and hacked by criminals. So from this perspective we should talk about two types of impact: data leakage, and total control that enables what we call an unacceptable event, an event we have to prevent using different types of organizational, technical, and instrumental features and practices. And let us talk about what we get if we do not think about the cybersecurity risks of the Internet of Bodies. The first consequence is a credibility gap among the end users of these technologies. As Igor mentioned regarding healthcare for remote regions, for people who need to get healthcare quickly and close to where they live, this becomes a vital problem, and it raises a lot of questions. We have a choice: on the one hand, to face these risks and find ways to work with them; on the other, to ignore them and face the problems of lacking healthcare and the impossibility of helping people live their best lives. And as Alina mentioned in her introductory speech, the Internet of Bodies is part of the Internet of Things topic, which is a good point for us, because we already know a lot about what to do with the Internet of Things from different points of view, in terms of what every actor has to do in managing cyber risks.
And as we are here on the United Nations floor and we share the multistakeholder framework, I have some inputs for every participant, for every group of participants, from this perspective. First, what can the producers and providers of Internet of Bodies devices do on this topic? The first thing is to treat cybersecurity as a vital question, as a very important point. As we think about the governance, environmental, and social points that are important for every company, cybersecurity is crucial for the company as well. If we place it in the same row, we can talk about an ESGC concept, an ESGC framework, as good practice for managing companies. Another point is to test the devices they provide to end users, to consumers, using different practices and inviting the best analysts worldwide. For that, there are well-known platforms, bug bounty platforms, that can help test devices in circumstances quite close to real life and to the techniques that criminals can use. This helps to improve the systems and make them more sustainable and suitable for end users. Another group is cybersecurity providers. They have their own responsibility to improve their skills and expertise in testing the Internet of Bodies, with a focus on its specifics: it is not just something implemented somewhere far away, but something implanted in an individual’s body, with an influence and an impact on the person. Their second responsibility is to provide specific solutions for protection; these could be software or hardware items, or processes and organizational features. All the instruments we have should be included in the consulting services offered by cybersecurity experts. And the third pillar is governments and authorities. I think they have to do what they are used to doing: establishing the rules for how to use these devices, how to let them enter the markets, and how to make them affordable for different groups of people, without any dependencies. To conclude, I want to make one point: if we do not take the cyber risks into account, or if we underestimate them in this exact topic, the Internet of Bodies, then we will have to pay a very high price, and that price is human life. Thank you.

Alina Ustinova: You covered a lot of topics that are really at risk today. Just a quick question: you mentioned several cases where devices were hacked and very bad things happened, where human lives were lost. Can you talk a little bit about human sanity? For example, there was an episode of Black Mirror where a person was wearing lenses; his memories were not changed, but he got caught up in replaying them over and over. Is the point to regulate the usage of IoB devices by the humans themselves, so that they do not harm themselves with them, or is it a matter of free will, and we should leave a person’s fate in their own hands?

Irina Pantina: Thank you for a very important question. I would say that individuals, on the one hand, are rather smart: they can understand the impact and influence that different devices have on their lives on the positive side of usage, but they underestimate the risks. From my perspective, there must be an open discussion of these topics, started by the experts, and there must be a balance between the views of the enthusiasts who want to try things and of those who understand the risks, in order to discover the steps we do not yet know about. Some risks arise and only become clear after a period of usage, after the devices have been through different situations. In this sense, so-called ethical hackers can play a very important role, because they test from unpredictable angles and points of view to identify the risks, find ways to fix them, and, as a result, provide secure solutions. Thank you.

Alina Ustinova: Thank you very much. Yes, I guess you are right on that point. Now we also need to cover another risk that is probably not always properly recognized when we speak about using these devices. When we use devices like a phone or a tablet, we do not really see the difference between different makes of phone, or maybe between ways of using them; but if we put a device into the human body, the difference will be seen. So the main question is: will some IoB devices not cause a segregation of people? For example, one person has a very costly, high-priced device in their body and gets advantages in their life that a person with a cheaper device does not have. How can we avoid this? Gabriella, can you share your point of view with us?

Gabriella Marcelja: Yes, thank you. Thank you very much for inviting me to this panel. In terms of segregation, you are actually hitting the jackpot with this question, because we will have a two-tiered future: the enhanced people, and the unaugmented, those who keep living the regular, normal life we live right now. When we talk about body augmentations powered by AI and Internet of Bodies technologies, such as neural implants, biometric prosthetics, and so on, these technologies could widen the existing gap between socioeconomic classes. This is something we need to understand, because the enhanced will definitely have advantages in intelligence, in strength, maybe in health, so this is certainly raising a few ethical questions going forward. Perhaps we should also think about how to answer some of the topics related to access to augmentation: should body enhancement be treated as a basic right or a luxury? There are topics about authenticity: will humanity lose the line between natural and synthetic existence? You also have topics related to decision autonomy, as was mentioned by the colleague here before: who actually decides what acceptable body augmentation is? Is it governments, corporations, individuals, doctors? All of these are topics that need to be discussed, because we need to think about patient-centered healthcare going forward. Who is the ultimate decision maker? Of course it is us, the individuals, but the doctors are the ones with the knowledge; on the other hand, they have the knowledge of the body, while the technology is most probably owned by a private entity that knows the technology and will sell the ideas to the doctors. So it is a very complex setting, and the ecosystem, of course, will be regulated by the government. This is something we need to think about in general when it comes to the future socioeconomic integration that is going to happen; there is no way out of it. Whenever you get new technologies and try to do something new, some impact, one way or another, is going to happen. Here we can also mention the so-called AI-powered workers: we are not talking about robots, but about enhanced humans who could outperform normal workers like us today. Eventually you might not need a lot of vitamins, or anti-stress pills, or vitamin D every day; perhaps we will eventually be a little bit faster and smarter in that sense. We also have access to health-augmenting implants, which of course raises the question of wealth, so it will create divides: the augmented elite, if you will, and the unprivileged bio-traditionalists, let us put it that way. Of course you can always choose, but if you want to power up, perhaps some people will choose a new way of existence. And we do not know how this is going to be perceived. Is it going to be perceived as something cool? Or is it going to be perceived as “oh, you’re sick”? This is also the kind of thinking we could discuss in general, and I cannot say I have been on many panels discussing these types of issues, so I am more than happy that we keep discussing them. And in general, global inequality will definitely be a topic.
So wealthy nations will adopt augmentation tech faster. So we will see a marginalized group of countries. And we definitely need to eventually ensure equal access if this is in the interest of the patient. And on this point, if I may, I would continue a little bit on the harm that these technologies can do, because we need to understand that IoB devices, like smart lenses, are capable of recording everything you see. This could revolutionize healthcare and law enforcement, but it also poses major threats, because we have surveillance and privacy risks on one side: these types of lenses could easily become covert surveillance tools, and without regulation they could record people without their consent and enable governments and companies to track citizens’ every move. We already have technologies doing all of this, and putting them on or in our bodies will only amplify it. They could also be used to create deepfake realities by manipulating video content recorded through these lenses. And then there is every type of exploitation you can think of, starting with corporate manipulation. The corporate world is focused on profit; we need to make a profit in order to be sustainable, because otherwise we need to lay people off, and that is not a good way of doing business. So companies could record users’ environments to deliver ultra-targeted ads. You have Meta, with Facebook, Instagram, and all the similar platforms, which of course already use the data they see right now as their actual capital. But this is also an open door for blackmail and for social control by hackers who can access the footage and exploit these private moments of individuals. In this context, we definitely need to think about the moral implications and about comprehensive governance strategies, which must ensure ethical approval of devices, strict licensing, penalties for any misuse, and very transparent algorithms, with monitoring of how data is stored, where it is shared, and who has access. AI still has big problems to solve in all of this, so we place our hope in the intelligence of the experts and analysts working on it to address the black-box issues that AI has right now and the biases that it creates. In any case, this raises cybersecurity risks and AI-driven privacy violation and cybercrime risks, an ecosystem that needs to be monitored and definitely discussed before we become a kind of guinea pig without a clear understanding of where this can go and what we should do. We are, for sure, individuals and patients to our doctors; we will surely be the ultimate decision makers, and we will surely sign papers saying that we understand the risks; there is no other way out of this. But at the same time, the supply chain, from the idea to the development and then to the implantation in the body, definitely needs to be monitored, and so do the manufacturing and the understanding of how you can fix something if it breaks. Where do you go when your implant is not working and you are glitching? Will we have centers, like phone repair centers, where you just go and get repaired?
So this is a little bit of the supply chain that would eventually need to be thought through, together with all the ethical implications at hand.

Alina Ustinova: You actually covered a lot of the things we are trying to understand about these new emerging technologies. One thing you mentioned: yes, we have phones and we use them every day of our lives, and if such devices are put inside our bodies, things will definitely change. So I will go to James and ask him to start with that question. James, how do you think a person can be offline if a device implanted in them keeps them constantly online? And please share with us your view on how the Internet of Bodies will develop in the future.

James Nathan Adjartey Amattey: Yes, thank you very much. My name is James, and I am speaking from Ghana; I do hope I am clear. Now, the Internet of Bodies is not new, but it is one of the emerging components of the Internet of Things, and it involves the integration of chips into regular devices to track, collect, and analyze data. In this new realm, when we look at health, for example, we are looking at the three phases of health: preventive, curative, and protective. So, for a disease like asthma, a doctor wants to know what the patient’s triggers are, when the triggers happen, and how often they happen, and sometimes all of that consultation cannot be done in the hospital. There are certain autoimmune diseases that do not have any known cure, so the use of IoB can help doctors, hospitals, and health facilities determine what causes them, what the triggers are, and what can be done to prevent them. Now, unfortunately, there is life after the hospital, and the person has a life beyond the condition they face, so it is very difficult to constantly track them and keep them online. Other members of the panel have spoken about cybersecurity risks and about data risks, but I would look more at social risks. For example, we are looking at a problem of dependence: how are we going to make sure that the patient does not become too dependent or over-reliant on the implant, unless it is something like an artificial limb that actually helps the patient to walk? Take pills, for example: there is a kind of pill with cameras and sensors that stays in the body and actually records internal body activity. We want to ask ourselves: how long is it allowed to stay in the body? How much influence does it have on things like genome editing, DNA configuration, and DNA programming, and how much can that alter the unique character of the person? And what kind of data are these things sending? We do have something called CRISPR-Cas9, a form of genome editing that uses the bacterial immune system to modify the DNA of living cells. That means that currently there is data, there is research, and there are certain elements of IoB that can modify that component of your DNA. And that modification then leads to modification of character, of behavior, of influences, and that could have very long-term effects. Now, one of those issues is manipulation and social engineering, where there is something we call hyper-personalization syndrome. Currently, if you and your friend are having a conversation on WhatsApp or any of these social media platforms, when you jump onto another platform you can see an ad for exactly what you were talking about with your friend. Now, how about this going into your body and actually understanding what you are feeling at the moment? That could have very adverse effects. For example, when we talk about suicide and mental health, we want to know how often a device can manipulate and influence suicidal thoughts, and how often it can push a person to act on those thoughts.
So this real-time modification of the DNA can lead to real-time manipulation, whereby the person is offline, but there is an internal, online modification. Now, we have what we call data breaches. Data breaches happen when data that is supposed to be stored in a particular place is accessed by an unauthorized third party, either through hacking, brute force, or the use of spyware. Now, if it’s an external software system, you know, there could be preventive measures that could be implemented, but if it’s in the body, and then the body is now being misconfigured, how do we reconfigure that? We also have to look at things like updates. Updates are a very thin line when it comes to human-computer interaction. If your phone updates on its own, it has the ability to influence the behavior of applications, the data they use, and the permissions they have. So if we are looking at the Internet of Things and the Internet of Bodies, we have to design update mechanisms and frameworks that do not have adverse effects on the body. We have to look at compatibility, at biology, and at device versions, and those things are currently not regulated. I know it might be a bit different in the health field, and I am not much of a doctor, but at the level of a national regulatory framework, I am yet to see a comprehensive study and a comprehensive law that caters for devices, their updates, and the changes these updates will bring to the body. Now, all of this data and all of this real-time tracking of a person who is online, even though the person exists offline, can lead to things like predictive profiling. Currently in analytics we do what we call persona definition, where you look at the psychographic nature of a person: what the person thinks, what the person feels, what influences their buying decisions. With IoB and DNA programming, companies or people with malicious intentions could use the data our bodies provide through these devices to profile us and then use predictive analysis to determine what somebody is more likely to buy. So it moves from device and marketing strategies to tweaking devices that are in your body. For example, we have the new wave of wearable devices, especially for the eye, which have the ability to stimulate the iris and send different signals to the brain. These things then lead to what we call an attack on cognitive security. Once you bring the brain into the picture, you have an attack on the cognitive behavior of the person: you are now exploiting cognitive biases, and you have to deal with brain-computer interface vulnerabilities. For example, we have the case of a coin-sized chip implanted into the brain to allow differently abled people to interact with computers and to move things.
Now, how do we create a balance between re-enabling and integrating that person and also enabling that person to be independent of the device, to have the personal intuition or the personal drive to be able to turn it on and off? I think these are some of the things we could look at: how can a person turn off some of these devices that are in them? How much power do they have over their devices? Is it a matter of manufacturer versus patient, and who wins? And if there is an issue with the government, and the government becomes a third party between the manufacturer and the patient, who is going to win the ability to determine how, when, and where their data is used?

Alina Ustinova: Thank you very much. I guess you covered a lot of issues, and it was very interesting to listen to you. You mentioned something called genome changing, which is a very interesting aspect that no one had raised before. Before we move to the Q&A session, I have a question for every speaker. You know, yesterday Henry Kissinger said a very interesting thing: he said that to survive, humanity eventually needs to have a symbiosis with AI. Maybe you can comment: do you agree with this point of view? Do you think that eventually we will just become part AI, part human, and so on? And then we move to the Q&A. Just a short answer. Who wants to start? Irina, please.

Irina Pantina: Let me start first. Let me be the first. I believe that humans are smart enough to apply AI in the fields where they can get a positive response, a positive impact, rather than a negative one. And we have enough power at different levels to fix the negatives. From time to time, I am sure, we will see examples of misuse of this technology, but at the same time, in parallel, we will have examples of how to fix it.

Alina Ustinova: Does anyone want to add something and share their opinion? Gabriella?

Gabriella Marcelja: I will be very quick. I think this is a philosophical question: where do we want to go as humans? Perhaps at the level of Kissinger, at the level of global strategists, people are already trying to understand what comes next, so perhaps this is the way to go. I think it is an inevitable evolution of the technology, because once we reach a certain limit of development, whether as a country or a company, people need to sit, think, and then decide: where do we go now? So it is just a matter of understanding the possible futures and whether we like those options or not. At this moment, if everyone is feeling excited about this, I think humanity wants to try it out, not knowing what will happen. It is the curiosity inside us that wants to keep moving: let me see what will happen, even though we most probably will not be happy with the result. It is just human nature to keep pushing into the unknown when we already have everything. I think we definitely need to fix what I would call the basic problems of the world. But the technology has moved so far forward; of course, there is still quantum computing and all the cybersecurity issues that come with it, and we are still very far from getting all the AI chips running, so there are many things left for humans to do. But I think this is just an additional curiosity that people want to see happen, and I guess we will live long enough to see what happens.

Alina Ustinova: Okay, if no one has anything to add, we are ready to move to the questions. If anyone has a question, please raise your hand, and if anyone online has a question, write it in the chat and Anna will read it to our speakers. So, do we have any questions? No one has any questions? Okay, then I will ask my own question, which I am very interested in. We have covered a lot of aspects of the topic, but I like futuristic works of art, especially games and movies, and they show that we face a very gloomy future if we come to use these technologies. For example, many games show that if you have richer medical insurance coverage and, as I said before, costly implants, you will get the medical attention you need and you will live longer than a person who does not have them. Maybe it is a very cynical question, but will the Internet of Bodies eventually lead to a situation where some companies, not governments, will regulate who will live and who will die, depending on what kind of implants people have in their bodies? Are we moving from government regulation to company regulation of this technology? If anyone has anything to add, you are welcome to add something to that.

James Nathan Adjartey Amattey: Yes, if I may. I think it has to go hand in hand: we cannot just regulate the companies and leave out the governments; the governments also have to regulate themselves. In this era, where we have a lot of wars and interruptions to global peace, governments can use some of these things as tools to reprogram people and give them extra abilities, so it is a matter of limiting how far we can go with this, because we do not want a case where it is an open playing field and anybody can do anything. On one hand, you have to regulate the companies producing these things; on the other hand, the government also has to regulate itself, to prevent itself from using this as a weapon rather than as a tool. A knife in the kitchen is used for cutting vegetables, but when you put that knife into the hands of a killer, it becomes a dangerous tool. The technology in and of itself is not dangerous, but once it ends up in the hands of people with malicious intent, it can have that dangerous element to it. So I think it comes down to co-regulation: governments at the national level, the countries themselves, and the international bodies they are part of, like the UN, the ITU, and all these international regulatory bodies. It is up to them to co-regulate themselves, and the governments that form part of them, to make sure the technology does not get out of hand. Thank you.

Alina Ustinova: Gabriella, you wanted to add?

Gabriella Marcelja: Just very quickly, I want to draw a parallel here, because your question asks who will live and who will die, and who decides this; I think this is already happening. Think of the waiting list in hospitals for an organ transplant. It is the same thing right now: there is a line, and you need to wait; whether you are a billionaire or you have nothing, there is a line. So this is the organ transplant parallel, which I think is very, very similar; it is just a matter of augmentation, maybe fixing a problem rather than having a new organ transplanted. Because, from how I see it, and I do not want to say which sector is going to come up with this, but I can already imagine it, we will have fake implants. We will have people in the gray economy who will copy these products and sell them to you; today it is something else, and tomorrow it will be the same product, but in the gray economy. It is inevitable. People will try to copy, people will try to do business in a way, and they will produce harm. And in any case, there will be a tier: okay, you do not have money for the good-quality products, no problem, there is a cheap version; yes, maybe you will die tomorrow, but hey, this is your chance of getting augmented. I think this is going to happen if we go into this sector, because, as with organ transplants today, yes, there is a line, but my specialization is in criminal law, so I do understand the unfortunate situation underneath: we do have criminal organizations and so on that work on skipping the line, if you will. So this will happen, and I think it is already happening in the system we have now, with the problems we currently have. The problem is just going to be augmented, let us put it that way.

Alina Ustinova: Lev, you wanted to add something?

Lev Pestrenin: Yes, thank you. I would like to add something about control. I think you are totally right: we now see that private companies have a lot of control over our phones and our devices, and sometimes this control is much stronger than what governments can provide. So I think there should be a balance of control, and control should come from all sides, from all participants in this process: private companies first, government second, and people third. As was said yesterday in a panel discussion, only knowledge is power, and it can help us to survive in the future. So I think we should learn about new technologies and be aware of how to control and manage them.

Alina Ustinova: Do we have any questions now? You have one? Can you please give the mic to the person? Thank you.

Audience: Really interesting topics. Given the work I do, I often think about the risks and challenges of controlling these types of technologies, as well as of harnessing their power. So I guess my question is: do we feel there is enough understanding of the technology to create the committees, organizations, and frameworks needed to make the most of what IoB can enable, while still protecting the individuals who will ultimately bear the risks of the implants as well as the benefits? Do you know what I mean? I want to understand whether we, as an international community, have the knowledge and capacity to harness the benefits while making sure we are able to understand and control the risks of the technology.

Alina Ustinova: So basically you are asking: do we know enough to do no harm? Does anyone want to answer? Lev, please?

James Nathan Adjartey Amattey: Maybe I can answer that a bit. Yes, I think it depends on where you are coming from. For example, I live in Ghana; I am from Africa, where, when it comes to IoB, we are mostly consumers. Most of the wearables we get are imported, and sometimes it is seen as a lifestyle thing. For example, there is this new trend of BBLs that is seen as a lifestyle thing, but people do not totally understand the technological implications; people do not understand where this is coming from and where it can lead. So I think we need to put more effort into community awareness, raising awareness and education on what is truly at stake here and how best we can move forward. Because, like Anna said, some of these things are inevitable; they will definitely happen. It is just a matter of: are we ready for it when it happens, or are we going to allow it to overtake us, so that we have to play catch-up on things like regulation and trying to control the spread and production of these things? Much like social media: we relaxed a bit and then we had to play catch-up, and I think a lot of that can also be seen in the AI space. So we want to make sure we are ahead of the trend, which is very difficult to do, I must say. But we have to start from somewhere, and we have to hope that we stay ahead, because once this overtakes us, it is actually part of the human body, and it will be very difficult to get rid of just by a policy or a regulation.

Alina Ustinova: Thank you. James, do you want to add something?

Lev Pestrenin: Thank you. Is it okay? I think it is not possible now to have all the knowledge in the world, because there are many different specialties. Take the example of artificial intelligence: if doctors use artificial intelligence, they alone cannot be confident that it is safe, and engineers, for example, also cannot be confident on their own that it is safe, because these are multidisciplinary projects and technologies. I think that if we want to have some control from our side, from the people, the future belongs to multidisciplinary teams, groups, and committees; it is even very good simply to have friends with knowledge in different spheres. So communication is our key opportunity to survive.

Alina Ustinova: Thank you very much. I guess we are out of time, so thank you for a wonderful discussion; I think we covered a lot of aspects. If you want to exchange contacts with the listeners, you are most welcome. Thank you to our online speakers as well, who joined us today. And have a good time here at the forum. Thank you very much.

Lev Pestrenin

Speech speed

87 words per minute

Speech length

1282 words

Speech time

879 seconds

AI and IoB can improve healthcare quality and access

Explanation

Lev Pestrenin argues that artificial intelligence and Internet of Bodies technologies can enhance the quality of healthcare and improve access to medical services. He suggests that these technologies can assist doctors in diagnostics and treatment, leading to better patient outcomes.

Evidence

Example of AI implementation in radiology in Moscow, where AI processes images to help radiologists make diagnoses faster and more accurately.

Major Discussion Point

Medical applications of Internet of Bodies (IoB) technologies

Agreed with

Igor Sergeyev

Gabriella Marcelja

Agreed on

IoB technologies can improve healthcare

IoB raises issues of human autonomy and decision-making

Explanation

Lev Pestrenin highlights the potential impact of IoB technologies on human autonomy and decision-making. He suggests that as these technologies become more integrated into our bodies and lives, questions arise about who ultimately controls the devices and the data they generate.

Major Discussion Point

Ethical and social implications of IoB

Balancing innovation and risk mitigation for IoB is challenging

Explanation

Pestrenin acknowledges the difficulty in balancing the potential benefits of IoB innovations with the need to mitigate associated risks. He emphasizes the importance of multidisciplinary approaches to address these challenges effectively.

Evidence

Mentions the need for multidisciplinary teams and committees to ensure comprehensive understanding and control of IoB technologies.

Major Discussion Point

Future development and regulation of IoB technologies

Igor Sergeyev

Speech speed

93 words per minute

Speech length

386 words

Speech time

248 seconds

IoB devices enable remote patient monitoring and early disease detection

Explanation

Igor Sergeyev contends that Internet of Bodies devices allow for remote monitoring of patients’ vital signs and health indicators. This enables early detection of diseases and health issues, particularly beneficial for patients in remote areas with limited access to healthcare facilities.

Evidence

Mentions various devices such as ECG recorders, glucose and cholesterol level analyzers that can be used for remote patient monitoring.

Major Discussion Point

Medical applications of Internet of Bodies (IoB) technologies

Agreed with

Lev Pestrenin

Gabriella Marcelja

Agreed on

IoB technologies can improve healthcare

Gabriella Marcelja


IoB implants and wearables can enhance medical diagnostics and treatment

Explanation

Gabriella Marcelja argues that IoB implants and wearable devices can significantly improve medical diagnostics and treatment. These technologies can provide real-time health data, enabling more accurate and timely medical interventions.

Evidence

Mentions neural implants and biometric prosthetics as examples of IoB technologies that can enhance medical capabilities.

Major Discussion Point

Medical applications of Internet of Bodies (IoB) technologies

Agreed with

Lev Pestrenin

Igor Sergeyev

Agreed on

IoB technologies can improve healthcare

IoB may widen socioeconomic gaps and create “augmented elites”

Explanation

Marcelja raises concerns that IoB technologies could exacerbate existing socioeconomic inequalities. She suggests that access to advanced IoB enhancements might be limited to wealthy individuals, creating a divide between ‘augmented elites’ and those without such enhancements.

Evidence

Discusses the potential for a ‘tiered future’ where enhanced individuals have advantages in intelligence, strength, and health over unaugmented individuals.

Major Discussion Point

Ethical and social implications of IoB

IoB technologies pose risks of surveillance and privacy violations

Explanation

Marcelja highlights the potential for IoB devices to be used as surveillance tools, posing significant risks to privacy. She argues that without proper regulation, these technologies could enable unauthorized tracking and data collection.

Evidence

Mentions smart lenses as an example of IoB technology that could record everything a person sees, potentially leading to privacy violations and surveillance risks.

Major Discussion Point

Cybersecurity risks of IoB technologies

Agreed with

Irina Pantina

Agreed on

IoB technologies pose cybersecurity risks

Irina Pantina

Speech speed

107 words per minute

Speech length

1250 words

Speech time

698 seconds

IoB devices are vulnerable to hacking and data breaches

Explanation

Irina Pantina emphasizes that IoB devices are susceptible to hacking and data breaches. She argues that these vulnerabilities could lead to serious consequences, including potential harm to users’ health and well-being.

Evidence

Cites examples of potential attacks on medical devices like insulin pumps and pacemakers, which could have life-threatening consequences if hacked.

Major Discussion Point

Cybersecurity risks of IoB technologies

Agreed with

Gabriella Marcelja

Agreed on

IoB technologies pose cybersecurity risks

Cybersecurity measures must be implemented to protect IoB users

Explanation

Pantina stresses the importance of implementing robust cybersecurity measures to protect IoB users. She argues that a multi-stakeholder approach involving producers, cybersecurity providers, and governments is necessary to ensure the safety and security of IoB technologies.

Evidence

Suggests practices such as rigorous testing of devices, development of specific security solutions, and establishment of regulatory frameworks.

Major Discussion Point

Cybersecurity risks of IoB technologies

Differed with

James Nathan Adjartey Amattey

Differed on

Approach to regulating IoB technologies

Multi-stakeholder cooperation is needed to govern IoB responsibly

Explanation

Pantina advocates for a collaborative approach to governing IoB technologies. She argues that cooperation between various stakeholders, including producers, cybersecurity providers, and governments, is crucial for responsible development and implementation of IoB.

Evidence

Outlines specific responsibilities for different stakeholders, such as producers implementing cybersecurity measures, cybersecurity providers developing specialized solutions, and governments establishing regulatory frameworks.

Major Discussion Point

Future development and regulation of IoB technologies

James Nathan Adjartey Amattey


Regulation is needed to ensure equal access and prevent misuse of IoB

Explanation

James Nathan Adjartey Amattey argues for the necessity of regulation to ensure equitable access to IoB technologies and prevent their misuse. He emphasizes the importance of balancing innovation with risk mitigation to protect users and society at large.

Evidence

Discusses the potential for IoB technologies to be used as weapons or tools for manipulation if not properly regulated.

Major Discussion Point

Ethical and social implications of IoB

Differed with

Irina Pantina

Differed on

Approach to regulating IoB technologies

Public awareness and education about IoB is crucial

Explanation

Amattey stresses the importance of raising public awareness and providing education about IoB technologies. He argues that this is essential for informed decision-making and responsible use of these technologies.

Evidence

Cites examples from Ghana where imported wearables are seen as lifestyle products without full understanding of their implications.

Major Discussion Point

Future development and regulation of IoB technologies

Agreements

Agreement Points

IoB technologies can improve healthcare

Lev Pestrenin

Igor Sergeyev

Gabriella Marcelja

AI and IoB can improve healthcare quality and access

IoB devices enable remote patient monitoring and early disease detection

IoB implants and wearables can enhance medical diagnostics and treatment

The speakers agree that Internet of Bodies technologies have significant potential to enhance healthcare through improved diagnostics, remote monitoring, and better access to medical services.

IoB technologies pose cybersecurity risks

Irina Pantina

Gabriella Marcelja

IoB devices are vulnerable to hacking and data breaches

IoB technologies pose risks of surveillance and privacy violations

Both speakers highlight the cybersecurity vulnerabilities associated with IoB technologies, emphasizing the risks of hacking, data breaches, and privacy violations.

Similar Viewpoints

Both speakers emphasize the need for robust regulatory frameworks and security measures to protect users and ensure responsible development of IoB technologies.

Irina Pantina

James Nathan Adjartey Amattey

Cybersecurity measures must be implemented to protect IoB users

Regulation is needed to ensure equal access and prevent misuse of IoB

Both speakers highlight the importance of addressing potential social inequalities and raising public awareness about the implications of IoB technologies.

Gabriella Marcelja

James Nathan Adjartey Amattey

IoB may widen socioeconomic gaps and create “augmented elites”

Public awareness and education about IoB is crucial

Unexpected Consensus

Multi-stakeholder approach to IoB governance

Irina Pantina

Lev Pestrenin

James Nathan Adjartey Amattey

Multi-stakeholder cooperation is needed to govern IoB responsibly

Balancing innovation and risk mitigation for IoB is challenging

Regulation is needed to ensure equal access and prevent misuse of IoB

Despite coming from different backgrounds (cybersecurity, healthcare, and regional perspective), these speakers unexpectedly agree on the need for a collaborative, multi-stakeholder approach to governing IoB technologies, balancing innovation with risk mitigation.

Overall Assessment

Summary

The main areas of agreement include the potential of IoB to improve healthcare, the need for robust cybersecurity measures, the importance of regulation and multi-stakeholder governance, and the necessity of addressing potential social implications.

Consensus level

There is a moderate level of consensus among the speakers, particularly on the need for responsible development and governance of IoB technologies. This consensus suggests a shared recognition of both the potential benefits and risks associated with IoB, implying a cautious but optimistic approach to future development and implementation of these technologies.

Differences

Different Viewpoints

Approach to regulating IoB technologies

Irina Pantina

James Nathan Adjartey Amattey

Cybersecurity measures must be implemented to protect IoB users

Regulation is needed to ensure equal access and prevent misuse of IoB

While both speakers agree on the need for regulation, Pantina emphasizes cybersecurity measures and a multi-stakeholder approach, while Amattey focuses more on ensuring equal access and preventing misuse through government regulation.

Unexpected Differences

Perspective on the inevitability of IoB adoption

Gabriella Marcelja

James Nathan Adjartey Amattey

IoB may widen socioeconomic gaps and create “augmented elites”

Public awareness and education about IoB is crucial

While not explicitly stated as a disagreement, there seems to be an unexpected difference in perspective on the inevitability of IoB adoption. Marcelja appears to view the adoption of IoB as somewhat inevitable, focusing on its potential consequences, while Amattey emphasizes the need for education and awareness, implying that adoption can be more controlled or guided.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to regulating IoB technologies, the balance between innovation and risk mitigation, and the potential social implications of IoB adoption.

Difference level

The level of disagreement among the speakers appears to be moderate. While there is general consensus on the potential benefits and risks of IoB technologies, speakers differ in their emphasis on specific aspects and proposed solutions. These differences reflect the complex and multifaceted nature of IoB technologies, highlighting the need for interdisciplinary approaches and continued dialogue to address the challenges and opportunities presented by IoB.

Partial Agreements

Both speakers recognize the potential social implications of IoB technologies, but they propose different solutions. Marcelja highlights the risk of widening socioeconomic gaps, while Amattey emphasizes the need for public awareness and education to address these issues.

Gabriella Marcelja

James Nathan Adjartey Amattey

IoB may widen socioeconomic gaps and create “augmented elites”

Public awareness and education about IoB is crucial

Thought Provoking Comments

Now we see that artificial intelligence, like Internet of Bodies, it’s our possible future, which could help us to live longer and to be healthier during our life.

speaker

Lev Pestrenin

reason

This comment frames AI and IoB technologies in a positive light as tools for improving human health and longevity, setting an optimistic tone for the discussion.

impact

It prompted further exploration of the potential benefits and risks of these technologies in healthcare and beyond.

Sometimes when we speak about internet of bodies, we need to understand that if something is integrated into your body, that means that you are a computer yourself and you can be hacked as a computer.

speaker

Alina Ustinova

reason

This insight highlights a key security concern with IoB devices, framing humans with implants as potentially hackable systems.

impact

It shifted the discussion towards cybersecurity risks and ethical concerns around IoB technologies.

So establishing the rules for how to use these devices, how to let them enter the markets and how to be affordable for different groups of people. Without any dependencies.

speaker

Irina Pantina

reason

This comment emphasizes the need for regulatory frameworks and accessibility considerations for IoB technologies.

impact

It broadened the conversation to include policy and equity issues surrounding IoB adoption.

So wealthy nations will adopt augmentation tech faster. So we will see a marginalized group of countries. And we definitely need to eventually ensure equal access if this is in the interest of the patient.

speaker

Gabriella Marcelja

reason

This insight raises important concerns about global inequality in access to IoB technologies.

impact

It prompted discussion of the potential for IoB to exacerbate existing social and economic divides.

Now, if it’s an external software system, you know, there could be preventive measures that could be implemented, but if it’s in the body, and then the body is now being misconfigured, how do we reconfigure that?

speaker

James Nathan Adjartey Amattey

reason

This comment highlights unique challenges of IoB devices compared to external technologies, particularly around updates and fixes.

impact

It deepened the discussion on technical and ethical challenges specific to internal IoB devices.

Overall Assessment

These key comments shaped the discussion by highlighting both the potential benefits and significant risks of Internet of Bodies technologies. The conversation evolved from initial optimism about health benefits to deeper exploration of cybersecurity concerns, regulatory needs, global inequality issues, and unique technical challenges of internal devices. This progression reflected a nuanced, multi-faceted examination of IoB’s implications for individuals and society.

Follow-up Questions

How will the integration of Internet of Bodies devices affect human evolution?

speaker

Alina Ustinova

explanation

This question explores the long-term implications of IoB technology on human biology and society.

How can we manage cybersecurity risks related to Internet of Bodies devices?

speaker

Alina Ustinova

explanation

This addresses the critical need for protecting individuals from potential hacking or unauthorized access to their implanted devices.

How can we prevent segregation or inequality caused by different levels of access to IoB devices?

speaker

Alina Ustinova

explanation

This question raises concerns about potential social and economic divides created by advanced medical technologies.

How can a person be offline if a device implanted in them keeps them constantly online?

speaker

Alina Ustinova

explanation

This explores the implications of constant connectivity on privacy and personal autonomy.

How can we ensure ethical approval, strict licensing, and penalties for misuse of IoB devices?

speaker

Gabriella Marcelja

explanation

This area of research is crucial for developing comprehensive governance strategies for IoB technologies.

How can we create a balance between re-enabling differently abled people through IoB devices and maintaining their independence?

speaker

James Nathan Adjartey Amattey

explanation

This question addresses the ethical considerations of enhancing human capabilities while preserving individual autonomy.

Will Internet of Bodies technologies eventually lead to private companies regulating who lives and dies based on implant access?

speaker

Alina Ustinova

explanation

This explores the potential shift in power dynamics between governments, companies, and individuals in healthcare decision-making.

Do we have enough understanding of IoB technologies to create effective frameworks for harnessing benefits while protecting individuals?

speaker

Audience member

explanation

This question addresses the readiness of the international community to regulate and manage IoB technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Digital Age

Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Digital Age

Session at a Glance

Summary

This discussion focused on the importance of citizen data in fostering inclusive digital environments and promoting human rights. Experts from various fields shared insights on engaging marginalized communities in data governance and digital transformation processes.

The panel emphasized the need for meaningful citizen participation across the entire data value chain to ensure digital systems and policies reflect diverse experiences and priorities. They highlighted tools and frameworks, such as the Copenhagen Framework and the Digital Rights Check, designed to promote responsible use of citizen-generated data and assess human rights risks in digital projects.

Participants stressed the importance of involving women, girls, persons with disabilities, and other marginalized groups in data governance practices. Examples were shared of how citizen data initiatives have improved accessibility and representation for these communities in digital spaces and policy-making processes.

The discussion also touched on the role of national statistical offices in integrating citizen-generated data into official data systems, particularly for addressing data gaps in areas like gender-based violence and disaster impact assessment. Ethical concerns and potential risks associated with data collection and sharing were addressed, with emphasis on the need for safeguards and standards.

Collaboration between civil society, government bodies, human rights institutions, and the private sector was identified as crucial for maximizing the impact of citizen data initiatives. The panel concluded by highlighting the need for increased investment in inclusive citizen participation in digital spaces and the importance of capturing and communicating the real-world impact of citizen data projects to garner further support and resources.

Keypoints

Major discussion points:

– The importance of citizen-generated data for inclusive digital transformation

– Tools and frameworks for ensuring human rights and ethics in digital data collection

– Involving marginalized groups like women, persons with disabilities, and children in data governance

– Challenges and opportunities for national statistical offices to incorporate citizen data

– The role of public-private partnerships in advancing citizen data initiatives

The overall purpose of the discussion was to explore how citizen-generated data can foster inclusion and human rights in public services and policies in the digital age. The panelists shared experiences and insights on involving citizens, especially marginalized groups, in data production and governance processes.

The tone of the discussion was collaborative and solution-oriented. Panelists spoke enthusiastically about the potential of citizen data while also acknowledging challenges. There was a sense of urgency about the need to make digital spaces and data processes more inclusive. The tone remained positive and constructive throughout, with panelists building on each other’s points and offering concrete suggestions for advancing the field of citizen data.

Speakers

– Papa Seck: Chief of the Research and Data Section at UN Women, member of the Steering Committee of the Collaborative on Citizen Data

– Dr. Hem Raj Regmi: Deputy Statistician in the Nepal National Statistical Office

– Joseph Hassine: AI for Social Good, Google.org

– Line Gamrath Rasmussen: Senior Advisor on Human Rights and Tech at the Danish Institute for Human Rights

– Bonnita Nyamwire: Research Director at Pollicy

– Elizabeth Lockwood: Representative of the UN Stakeholder Group of Persons with Disabilities for Sustainable Development, member of the Steering Committee of the Collaborative on Citizen Data

Additional speakers:

– Howie Chen: Works at the UN Statistics Division

– Dina: Audience member from Brazil

Full session report

Expanded Summary of Discussion on Citizen Data for Inclusive Digital Transformation

This discussion brought together experts from various fields to explore the importance of citizen-generated data in fostering inclusive digital environments and promoting human rights. The panel focused on engaging marginalized communities in data governance and digital transformation processes, emphasizing the need for meaningful citizen participation across the entire data value chain.

Opening Statements and Key Themes:

1. Importance of Citizen Data for Inclusive Digital Transformation

Panelists strongly agreed on the critical role of citizen-generated data in creating inclusive digital environments. Papa Seck, Chief of the Research and Data Section at UN Women, emphasized that citizen data is essential for fostering inclusive digital spaces. Bonnita Nyamwire, Research Director at Pollicy, argued that citizen data helps identify and address systemic biases, highlighting her work with women politicians and women in media through Pollicy's programs. Joseph Hassine from Google.org stressed its importance for accurately understanding world challenges and informed policymaking. Elizabeth Lockwood, representing the UN Stakeholder Group of Persons with Disabilities, highlighted how citizen data can fill critical gaps in information about marginalized groups, citing an example of how such data helped address barriers for persons with disabilities during the COVID-19 pandemic. Dr. Hem Raj Regmi, Deputy Statistician in the Nepal National Statistical Office, viewed citizen data as an alternative source for areas with data gaps, noting its potential cost-effectiveness and representativeness compared to traditional data sources.

2. Ensuring Human Rights and Ethical Standards in Citizen Data

Line Gamrath Rasmussen, Senior Advisor at the Danish Institute for Human Rights, stressed the importance of a human rights-based approach throughout the data collection process. She introduced tools such as the Copenhagen Framework and the Digital Rights Check, designed to promote responsible use of citizen-generated data and assess human rights risks in digital projects. Elizabeth Lockwood highlighted the importance of data confidentiality and protection, while also discussing the Digital Accessibility Rights Evaluation Index, which assesses digital accessibility across countries. Joseph Hassine addressed the need to consider potential harms from data sharing and misuse, emphasizing the importance of data validation and standardization.

3. Meaningful Participation of Marginalized Groups

Panelists agreed on the crucial importance of involving marginalized groups in data governance and collection processes. Bonnita Nyamwire emphasized the need to involve women and girls in data governance practices. Elizabeth Lockwood stressed the importance of ensuring accessibility for persons with disabilities, arguing that organizations of persons with disabilities should lead or co-lead citizen data initiatives. Dr. Hem Raj Regmi highlighted the focus on marginalized communities and population groups in Nepal’s efforts to incorporate citizen data, mentioning Nepal’s new statistics act from 2022 and plans to implement citizen-generated data for violence and disaster impact measurement.

4. Collaboration and Capacity Building

The discussion underscored the importance of cross-sector collaboration and capacity building to advance citizen data initiatives. Bonnita Nyamwire emphasized the need for collaboration between government, civil society, and the private sector. Elizabeth Lockwood called for strengthening capacity for inclusive citizen data, while Dr. Hem Raj Regmi suggested starting with small-scale pilots at municipal or district levels before national implementation. Joseph Hassine stressed the importance of capturing and communicating the impact of citizen data initiatives to attract more resources and support, highlighting Google.org’s focus on data for informed policymaking and more accurate/inclusive AI tooling.

5. Challenges and Opportunities for National Statistical Offices

Dr. Hem Raj Regmi provided insights into the challenges and opportunities for national statistical offices in incorporating citizen data, highlighting it as a potentially more cost-effective and representative alternative to traditional data sources.

6. Role of Public-Private Partnerships

Joseph Hassine provided perspectives on how private sector entities like Google.org can contribute to AI for social good and support citizen data projects.

Closing Recommendations:

In a final “lightning round,” panelists offered key recommendations:

– Line Gamrath Rasmussen: Promote and encourage adoption of the Copenhagen Framework on citizen data

– Bonnita Nyamwire: Develop and promote tools to ensure compliance with ethical and human rights standards in data collection

– Elizabeth Lockwood: Strengthen capacity and increase investments in inclusive citizen participation in digital spaces

– Dr. Hem Raj Regmi: Engage in monitoring and implementation of the Global Digital Compact

– Joseph Hassine: Capture and communicate the impact of citizen data initiatives to attract more resources and support

Unresolved Issues and Audience Questions:

Several issues remained unresolved, including strategies for including children and older adults in data initiatives, balancing data confidentiality with openness and accessibility, and addressing the digital divide to ensure offline alternatives for data participation.

Closing Remarks:

Howie Chen, representing the UN Statistics Division, provided closing remarks, emphasizing the importance of the discussion in advancing inclusive digital transformation.

The overall tone of the discussion was collaborative and solution-oriented, with panelists building on each other’s points and offering concrete suggestions for advancing the field of citizen data. The discussion concluded by highlighting the need for increased investment in inclusive citizen participation in digital spaces and the importance of capturing and communicating the real-world impact of citizen data projects to garner further support and resources.

Session Transcript

Papa Seck: …Papa Seck, and I'm the Chief of the Research and Data Section at UN Women. I'm also wearing another hat today as a member of the Steering Committee of the Collaborative on Citizen Data. The Collaborative was established last year to foster partnerships, build capacity and promote the responsible use of citizen-generated data for inclusive and sustainable development. UN Women was the inaugural chair of the Collaborative, together with the UN Statistics Division, and we're really happy to organise this session today. Colleagues, in the digital era, the participation of citizens in data-driven processes is essential for fostering inclusive, equitable digital environments that can serve the diverse needs of all communities and community members. Since Sunday, day zero of this forum, we've heard repeatedly the reasons why the data that feeds AI algorithms, for example, needs to be representative, but also needs to be scrutinised, and this absolutely necessitates citizens' inclusion and engagement. The newly adopted Global Digital Compact considers inclusivity a cornerstone of a fair and equitable digital future. Meaningfully engaging all citizens in data production and use is important to ensure that digital systems and policies reflect their unique experiences and priorities, and this paves the way, of course, for more inclusive digital transformation. To put that another way, there will be no transformation without inclusion and diversity, and this panel today really reflects that. So we have distinguished speakers from civil society, a human rights institute, a national statistical office, and the private sector, who will share with us their experiences today on the ways the citizen data movement can really help foster inclusion and human rights in public services and policies in the digital age. The session will also explore how the recently launched UN collaborative on citizen data and the Copenhagen framework on citizen data could support marginalized individuals and communities in this endeavor. So we have five distinguished speakers today. We have Line Gamrath Rasmussen, who's the senior advisor on human rights and tech at the Danish Institute for Human Rights. Joining me in the room is Bonnita Nyamwire, who's the research director of Pollicy. And online again, we have Elizabeth Lockwood, who's the representative of the UN Stakeholder Group of Persons with Disabilities for Sustainable Development. We have Dr. Hem Raj Regmi, who's the deputy statistician in the Nepal National Statistical Office. And last but not least, Mr. Joseph Hassine, who works on AI for Social Good at Google.org. So let me first start with Line. We will start with a round of just two questions, one question for each of the panelists. And what I ask you is: can you please briefly describe for us your experience on citizen data, human rights, and digital transformation? How does this come across in your work? It will be the same question for all of you, and I kindly ask each of you to stick to the allocated time of two minutes. So Line, let me start with you. Okay, so while we sort that out, Bonnita, I'll go to you then. Thank you so much.

Bonnita Nyamwire: So at Pollicy, where I work, we are based in Kampala in Uganda, and our team is spread across the African continent. So our work is deeply rooted in empowering citizens through data. We've led initiatives that leverage citizen-generated data to advocate for improved digital services, inclusivity, and accountability. And by integrating data-driven approaches, we've explored the intersection of human rights and digital transformation, ensuring that marginalized voices, especially women and girls, are heard in policy-making processes. So our work mostly focuses on improving digital technology services for women and girls across the African continent. And so a significant focus of our work has been ensuring that digital transformation does not exacerbate inequalities, especially gender inequalities, but rather creates opportunities for more inclusive and equitable societies. This therefore includes extensive research that we have done as Pollicy in various African countries on issues like technology-facilitated gender-based violence and the need for safer digital ecosystems, and other research on citizen engagement in data governance processes, as well as the application of gender data in data governance. Thank you. Papa, back to you.

Papa Seck: Great. Thank you very much, Bonnita. Line, should we try again, or have you been able to unmute? We still cannot hear you, but let me see if we can try to fix it with the technician in the room. So, in the meantime, let me go to you, Elizabeth. Sorry, Elizabeth, are you speaking? I am. Now I can unmute. Apologies. I couldn't unmute.

Elizabeth Lockwood: Thank you so much. Apologies, I couldn't unmute. Thank you so much, Papa. Along with Papa, I'm also one of the Steering Committee members of the Collaborative on Citizen Data, so I'll speak to that and then to my other role. One of the key outputs of the Collaborative is our Copenhagen Framework, which highlights the importance of meaningful citizen participation across the entire data value chain. It outlines principles for citizen data and the necessary enabling environment. And a central objective of the framework is to integrate citizens' perspectives into broader discussions within the national data ecosystem, including topics such as digital transformation, artificial intelligence, and data governance. The human rights-based approach to data is also central to the framework, ensuring open, transparent, inclusive, participatory, confidential, ethical, and other approaches. As the stakeholder group of persons with disabilities, we really focus on data led by organizations of persons with disabilities, citizen data in particular, to fill critical data gaps, to provide evidence, to influence policies, and to ensure data reflect reality. This includes data to address the increasing digital divide, which disproportionately affects persons with disabilities: 80% live in the Global South, and of that group, 90% do not have sufficient access to the assistive technologies they require. And I'll talk about that a bit more. Thank you, Papa.

Papa Seck: Great. Thank you very much, Elizabeth. Line, shall we try again? Yes, I hope you can hear me now. Yes, now it's good. Thank you. Perfect. Thank you so much, and thank you for waiting.

Line Gamrath Rasmussen: Yeah, so working in this space of technology and human rights, there's always this dual relationship, where tech creates these wonderful opportunities for promoting and protecting human rights, but it can also cause serious harm to human rights. So at the Danish Institute, we obviously take a human rights-based approach; that means we work on the responsibilities of states and businesses to use and deploy technology in a way that's human rights compliant. But I find that sometimes we forget, as human rights professionals ourselves, that when we use technology, we also have to think about how we make sure it's happening in a way that is not causing harm to human rights. So that's why we're working with this human rights impact assessment methodology, where you actually try to assess the risks and the impacts that the technology will have on the users involved in your projects. And that's why we made a few toolkits on human rights impact assessment in the digital space, and also a tool called the Digital Rights Check that we'll talk about a little bit more later. Thank you.

Papa Seck: Great, thank you very much. Over to you, Dr. Hem.

Dr. Hem Raj Regmi: Thank you, Chair, distinguished delegates, ladies and gentlemen. Good evening to everyone. Yes, we all know that data are required by almost everyone, from individuals to governments, policymakers to decision-makers, researchers to business people. We also know that these data do not come automatically; we need to invest a lot in their production. Some data can be produced with somewhat less effort, like administrative data or MIS, management information systems, but data from outside the system are quite costly, particularly censuses, surveys, or even studies. Even governments like Nepal's have not been able to fund the production of enough data to monitor national plans as well as to report on the SDG indicators, which are international obligations agreed by governments. So we are looking for alternative data sources which may be more cost-effective, more reliable, and more representative. These non-traditional data sources come in different types; for example, there is big data, supported by AI, computing power, and high velocities, but the capacity to manage such big data is always limited. That is why we assume the best alternative data source may be citizen-generated data. Given these constraints, citizen-generated data may be the best alternative for data production and for data users, not only governments but the broader data-using society. That is why we have collaborated with the UN collaborative on citizen data to focus on producing some data, not all, particularly on the social side: for example, violence, household-level violence, even gender-based violence, which are quite common, and to report that type of information; or disaster-related information if a disaster occurs in any place. For example, Nepal is prone to many types of disasters, and in that situation, if we can develop some mechanism to report these data from citizens, or on the issues that we are discussing over here, like human rights violations, then these data sources may be invaluable for governments and even for the national statistics office, so that we can claim that we have been able to provide data to the data users: the national data users, the global data users, as well as those who are left behind, as per the theme of the SDGs. That is why we are collaborating with the citizen data forum and trying to advocate the Copenhagen Framework; some of these issues are already there in the statistics sector of Nepal. So yes, I can go on later, Papa. Thank you.

Papa Seck: Yeah, exactly, thank you very much. Sorry, we're a little bit limited in time, but I'll come back to you with the second question. So last but not least, over to you, Joseph.

Joseph Hassine: Thank you, thanks for having me today; I'm grateful to participate. In my role at Google.org, I look at this through the lens of a funder and external partner, where, from Google's philanthropy, I'm responsible for helping non-profit organizations and civil society leverage AI and technology toward digital transformation and ultimately toward more positive societal outcomes. Specifically, my team funds nonprofits to build with AI in fields like health, where AI can accelerate diagnostics; education, where it can be a supportive tool for teachers; or food security, where AI can help predict famine to enable earlier, more effective response, just as a few examples. All of this work requires data, data that is accurate, accessible, and inclusive, and thus we also fund work to encourage a more open data ecosystem as a whole, such as through our partnership with the UN Statistics Division to build UN Data, a tool that uses Google's Data Commons to build an open-source platform for understanding vast amounts of UN data in a single interface with natural-language search functionality. So these types of projects are really all toward that goal of a more open data ecosystem that provides us a clearer and more accurate understanding of the world and the challenges we're facing. So I'm looking forward to talking more about this work and how it connects to the work by the collaborative on citizen data that some others have mentioned. Thank you for having me.

Papa Seck: Great, thank you very much, Joseph, and thanks to all of you. I think, you know, again, each of you has highlighted one area, just as examples of why the work on citizen data is so rich. So for the second round of questions, I'll have specific questions for each of you. And in five minutes, Line, can you tell us a little bit more, a little deeper, about what you've shared with us? You've touched upon some of the tools that the Danish Institute for Human Rights offers to prevent and mitigate human rights risks related to digital projects and solutions in the citizen data context. Can your tools be used to assess whether data collection using mobile apps, or the development of platforms to showcase data, is at risk of violating human rights? Over to you.

Line Gamrath Rasmussen: Yes, sorry for that small recess. Yeah, definitely, indeed, we do develop these tools. I shared in the chat a link to what we call the Digital Rights Check. It's something we developed together with GIZ to really have a tool for digital-for-development projects, where the staff involved in these projects could have a sort of risk management guide for what the human rights risks or impacts of this particular project will be. So the tool gives you several entry points. You can enter as technical development cooperation, as an investor, or others, and it allows you to… It's an online questionnaire that helps you identify these potential issues and some corresponding actions that can help you rectify or mitigate the risks that you identify. So it will help the users consider technology-specific risks and application-specific risks: what kind of technology are you using? Is it AI? Is it cloud services, et cetera? And then also the context-specific risks as they relate to, for example, data protection regulation in the context or country where you are deploying this technology. It also, as a human rights-based tool, pays attention to vulnerable and marginalized groups and makes you consider accessibility issues as well. So it really takes you around the whole process of identifying the different risks, and also makes you consider stakeholder engagement and how you think about transparency and accountability. It also gives you case studies and further readings that you can link to, and then in the end you get a final results page with the risks identified and a sort of human rights action plan that you can use to follow up and really react to the risks that you have identified. This is an open-source tool that is open to everybody, so if anybody would like to adapt it to their own context or their own projects, that's perfectly feasible. And in the true spirit of privacy by design, the data is also not stored, so it will be deleted as soon as you finish the questionnaire. So I would welcome you to explore it, and also give us feedback if there's anything you would like to see in the tool, and I hope it can be useful for many of your projects. Thank you.
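
To make the questionnaire flow described above more concrete, here is a minimal sketch of how such a screening can be structured. This is an editor's illustration only, not the Digital Rights Check itself: the entry point, questions, risk tags, and suggested actions are all hypothetical examples.

```python
# Minimal sketch of a questionnaire-driven digital rights risk screening.
# Illustrative only: NOT the actual Digital Rights Check. All questions,
# risk tags, and suggested actions below are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str     # yes/no question posed to the project team
    risk: str     # risk flagged when the answer is "yes"
    action: str   # suggested mitigation for that risk


@dataclass
class Screening:
    entry_point: str  # e.g. "development cooperation", "investor"
    questions: list[Question] = field(default_factory=list)

    def run(self, answers: dict[str, bool]) -> list[tuple[str, str]]:
        """Return (risk, action) pairs for every affirmative answer."""
        return [(q.risk, q.action) for q in self.questions if answers.get(q.text)]


screening = Screening(
    entry_point="development cooperation",
    questions=[
        Question("Does the project collect personal data via a mobile app?",
                 "Data protection risk (local regulation may be weak)",
                 "Map applicable data protection law; minimise data collected"),
        Question("Does the platform use AI or automated decision-making?",
                 "Risk of discriminatory outcomes for marginalised groups",
                 "Assess training data; add human review and appeal channels"),
        Question("Is the service reachable only online?",
                 "Exclusion of users on the wrong side of the digital divide",
                 "Provide an offline or assisted participation alternative"),
    ],
)

answers = {q.text: True for q in screening.questions}  # worst-case demo run
for risk, action in screening.run(answers):
    print(f"RISK: {risk}\n  -> {action}")
```

The real tool adds context-specific branching, case studies, and a final action plan, and, as noted above, deletes responses once the questionnaire is finished.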

Papa Seck: Thank you very much, and thank you also for sharing the tool. I've looked at it myself, and it's really, I think, an excellent product, so I encourage all of you to explore it. So now over to you, Bonnita. You've extensively researched the intersection of women, girls and technology. Could you please share your insights on why it is essential to involve women and girls in data governance practices? In what way can they be meaningfully involved?

Bonnita Nyamwire: Thank you so much, Papa. So involving women and girls in data governance is essential because their perspectives, their needs, and their experiences are oftentimes overlooked in decision-making, and yet they are very important. And so, as Pollicy, our research conducted with women and girls shows, for instance, that they are disproportionately affected by issues like online gender-based violence or technology-facilitated GBV, data privacy breaches, and other kinds of online injustices and discrimination. We have done research looking at women in politics, women in the media, and women human rights defenders across several countries on the African continent. And so why do they need to be involved in these processes? One, their involvement helps to identify and address systemic biases, ensuring fairer systems and policies. Including their perspectives also ensures that their unique challenges, such as the ones I've already mentioned, TFGBV and algorithmic discrimination, are not overlooked but rather prioritized in governance practices, with safeguards developed for them. And then involving women and girls also ensures fair representation in the digital age, because as data becomes central to governance and development, excluding women and girls perpetuates their marginalization. Their inclusion is therefore vital for achieving gender parity in leadership and decision-making roles in the digital era, but also for achieving Sustainable Development Goal 5. And so how do we involve women and girls in a meaningful way? Meaningful participation for women and girls can take various forms, ranging from involving them in participatory workshops, where women can co-design data governance policies as stakeholders, ensuring that solutions are truly reflective of their realities. And we have done this as Pollicy with women politicians and with women in the media on our different programs. We have a program for women politicians called Vote Women that we have implemented in Uganda, in Tanzania, and in Senegal, and we have seen their involvement help to, you know, protect them online, but also improve their wellbeing in digital spaces. We also have another program, which is still running, Future of Work for Women in the Media, where we are also building their resilience on online platforms and making sure that their voices are amplified. Then the other point on meaningful involvement of women is the need to invest in digital literacy programs to improve their digital skills, because our different studies have also shown that women lack digital skills. So involving them through capacity building, especially digital literacy and skills-building initiatives, will enable their participation in any online programs. Then another is to ensure that their representation on data policy boards and in leadership and decision-making bodies is also improved, which will further ensure that they are key stakeholders in decision-making processes, whether at the community, national, or global level. So it is important that women and girls are also involved in the decision-making processes at different levels. Then the other one is to create safe spaces for dialogue, where their voices can influence both national and global data governance policies. And these are some of the spaces for dialogue, like here where we are at IGF and several others.
And so, you know, we have seen the involvement of women in these spaces actually improve their participation and amplify their voices in data governance. And then meaningful involvement also requires removing barriers to participation by embedding gender and intersectionality lenses in every stage of policy development and digital transformation. Very important is this point on embedding an intersectional lens at every stage of the data governance process, so that no one is left behind. Then lastly, it is also important to foster collaboration and accountability in the tech ecosystem to prioritize the needs and rights of women and girls. And so this will involve collaboration with women's and girls' rights networks and organizations, as well as government departments that work on gender issues. Thank you very much.

Papa Seck: Great. Thank you very much, Bonnita. At UN Women, and this was a task given by the UN Statistical Commission to several agencies recently, we are developing a new framework for the measurement of technology-facilitated violence. And I'm increasingly convinced that citizen data has to be central to this, because this violence takes so many different forms. And I think at this IGF, we've heard several examples of that, where measurement becomes really tricky if you don't have inclusion. So thank you very much. And I will definitely be looking at some of the tools and some of the work that you've done in this area, because I think this again has to be part of our global efforts to develop meaningful measurement of this phenomenon. So Elizabeth, we turn to you. With your work on disability statistics, could you give us an example of how citizen data helped to ensure that digital tools are inclusive?

Elizabeth Lockwood: Yes, thank you, Papa. I have three brief examples from partners. First, during the COVID pandemic, we had very little data globally on persons with disabilities and their experiences during the pandemic. So, as a result, NGOs and organizations of persons with disabilities gathered data themselves, using citizen data, to understand the barriers and solutions for persons with disabilities. And the findings that the stakeholder group of persons with disabilities collected indicated that persons with disabilities faced barriers in accessing digital technology in many vital areas, and this was critical for their survival in many cases, actually. This included lack of access to fast internet connections, lack of financial means to purchase data packages for devices, and lack of captions and sign language interpretation for those daily news briefings that we had. And what happened, since governments really weren't supporting this, is that organizations of persons with disabilities came in and supported their members, shared the information, and advocated to their governments. And in the case of deaf organizations, captions were added and national sign language was added in many cases, and these continue today in emergency settings. This isn't universal, and there are still many hurdles to clear, but it's important to recognize. And then another interesting point: accessibility on digital platforms such as Zoom increased for persons with disabilities, but when the pandemic lessened, this actually got worse, because it wasn't a priority for the general population. So it's an interesting point to add. My second example is a large-scale example. It's called the Digital Accessibility Rights Evaluation Index. It's a benchmarking tool developed by the Global Initiative for Inclusive Technologies for advocates, governments, civil society, and others to track how accessible ICT is in different countries around the world. The data collection is based on a set of questionnaires and was done in cooperation with Disabled Peoples' International, along with other organizations of persons with disabilities. It has documented 137 countries in 8 regions of the world, representing 90% of the world, with findings from 2018 and, most recently, 2020. And if you look at the index scores on the very nice online platform, you can see global and regional rankings, peer economic development group rankings, and implementation rankings. And for many countries, you can compare 2018 to 2020. Most countries have improved, but not all. So that's something I really recommend you look at. And then my final example is from the European Blind Union, which has done significant advocacy around accessible voting. Feedback was collected from blind and partially sighted individuals on the barriers they face using digital voting systems and election materials, and this turned into advocacy campaigns. As a result, this advocacy has led to improved compatibility with screen readers and other accessibility enhancements in voting in the European region. And in closing, it's only by ensuring that organizations of persons with disabilities are leading or co-leading citizen data initiatives that digital tools will be inclusive, reflecting the reality and needs of the communities themselves. Thank you.
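
As a concrete note on the index mechanics just described: comparing two rounds of a benchmarking index reduces to ranking countries by score within a round and tracking each country's change across rounds. The sketch below uses invented placeholder scores, not real index values.

```python
# Sketch of a cross-round comparison for a benchmarking index.
# The scores below are invented placeholders, not real index data.
scores = {
    "Country A": {2018: 54.0, 2020: 61.5},
    "Country B": {2018: 72.3, 2020: 70.1},
    "Country C": {2018: 40.8, 2020: 49.9},
}

def ranking(year: int) -> list[tuple[str, float]]:
    """Countries ordered from highest to lowest score for one round."""
    return sorted(((c, s[year]) for c, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for rank, (country, score) in enumerate(ranking(2020), start=1):
    delta = scores[country][2020] - scores[country][2018]
    trend = "improved" if delta > 0 else "declined"
    print(f"{rank}. {country}: {score:.1f} ({trend} by {abs(delta):.1f} since 2018)")
```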

Papa Seck: Thank you very much, Elizabeth. Great work. I've followed this, and I think you and I have also had conversations on some of the work that we are doing, because I think, you know, in the space of statistics there's still more that can be done, particularly on disability. And we've had several of these conversations, and I think the collaborative is definitely well-placed to enrich that work. So, Dr. Hem, over to you. You are considering, as you mentioned, the use of digital tools to collect data on gender-based violence, as an example. Could you please let us know how you plan to engage with the communities to ensure that the tools are inclusive? But also, I would add, in this space there are obviously lots of ethical concerns when it comes to the collection of data on gender-based violence; how do you aim to address those concerns?

Dr. Hem Raj Regmi: Thank you, Papa, once again. The statistical system in most countries has been governed by the fundamental principles of official statistics for many years. The statistics acts, rules, and regulations are particularly based on those principles, which were promulgated by the UN some 20 to 30 years ago. Now the situation is changing, and we need to change our statistical system to include other dimensions, particularly citizen-generated data, for example. Luckily, in Nepal we promulgated a new statistics act in 2022, almost two years ago, and there are a few provisions where we can use such data as an alternative data source. It's not yet streamlined, but it can be an alternative to what we are producing right now through different censuses, surveys, and studies. There is a provision called the survey clearance system, which is almost similar to the Copenhagen Framework, where different modalities have been suggested to produce the data: led by the CSOs, the civil society organizations, or in collaboration between the NSO and CSOs, or even led by the NSO. These frameworks are there, and similar, not exactly similar, but broadly similar modalities are there in the Statistics Act of Nepal that was promulgated in 2022. We plan to use those modalities to generate data in the future, for example, data on marginalized people, or data for sectors where a data gap exists. If there is a data gap, and the national-level surveys or studies are not able to produce data at that level for the marginalized people we are discussing here, then for those areas which are quite remote, or for those segments of society which are marginalized, we plan to use the Copenhagen Framework to produce data for the different sectors. And then recently we held one national-level workshop to implement this citizen-generated data, and we have decided on the two areas that I mentioned in my previous statement. One is on violence, particularly gender-based violence and domestic violence, and the other is on disasters, particularly the impact of a disaster, not the hazards and risks. When a disaster occurs, there is the economic impact, the impact on livelihoods and on people's lives, or even on crops, livestock, and infrastructure. These two areas may serve as pilots for the implementation of the Copenhagen Framework in Nepal. If the confidence in and reliability of the data can be raised with these pilot activities, then we hope that in the future we can take citizen-generated data as an alternative data source, which may become a major part of the official data system, just as censuses, surveys, and official MIS have been the major part of the official data system until now.
Citizen-generated data may be another source in the data system, which can help us particularly with marginalized people: women, children, displaced people, and people affected by disasters. Yes, that's the plan.

Papa Seck: Great, thank you very much. I really look forward to seeing this work. I think maybe one piece of advice I can also offer is that a lot of work on citizen data and violence has been done in Ghana in particular. I would definitely advise you to also talk to them and learn about their experiences; I'd be happy to make the connection. Joseph, again, all our gratitude for the strong support to the work of the collaborative on citizen data. My question to you is: what motivated you to invest in this space? What social challenges in the context of technological innovation do you hope to address through your investments in citizen data?

Joseph Hassine: Thank you. We're thrilled to be able to support, and so grateful to be part of this work in a small way. I can zoom out a little bit. Google.org's work on data has historically focused on two things: data that enables more informed policymaking or decision-making, and data that enables more accurate and inclusive AI tooling, and sometimes both. The funds that we provide might be used to build a new data analysis platform, create advocacy tools that allow folks to communicate with policymakers on critical issues, or collect data in different languages and contexts in order to improve the accessibility of AI models. Citizen-generated data in particular is foundational to all of these efforts, because at the end of the day it helps give us a more inclusive and accurate picture of the world around us, which is critical for any tooling that is built upon that data. At the same time, the data is only helpful if it's used to inform some action through new understanding of a community or a problem, and in order to make data useful, from my perspective, there's some amount of validation and standardization that's critical to ensure that these individual small or large-scale citizen data efforts are seen as trustworthy and usable, and ultimately able to spur positive change. I think it's not dissimilar to the Digital Rights Check, for example, that a colleague mentioned: you need an expert in this space to create a set of standards that others can follow when building, so that we can look at data produced by these entities and know that it meets some benchmarks. And so that type of work is part of what makes us excited to support the collaborative on citizen data, because I think the collaborative is filling that critical role through the Copenhagen Framework and through pilot efforts to ensure that there's reliability in this citizen data ecosystem as it continues to grow. So we view that kind of central leadership as really critical to build capacity and create standards in the space, and we're thrilled to be able to support the collaborative to continue to build upon that.
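
Joseph's point about validation and standardization can be made concrete with a small sketch: before a citizen-generated record enters a shared dataset, it is checked against a minimal common schema. This is an illustration only; the field names, categories, and rules below are hypothetical, not a published standard of the Collaborative.

```python
# Sketch of a validation/standardisation gate for citizen-generated records.
# The schema below is hypothetical, not a published standard.
from datetime import date

REQUIRED_FIELDS = {"reporter_consent", "date", "location", "category"}
KNOWN_CATEGORIES = {"gbv", "disaster_impact", "accessibility_barrier"}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if record.get("reporter_consent") is not True:
        problems.append("no informed consent recorded")
    if record.get("category") not in KNOWN_CATEGORIES:
        problems.append(f"unknown category: {record.get('category')!r}")
    try:
        date.fromisoformat(record.get("date", ""))
    except ValueError:
        problems.append("date is not ISO 8601 (YYYY-MM-DD)")
    return problems

record = {"reporter_consent": True, "date": "2024-12-18",
          "location": "Kathmandu", "category": "disaster_impact"}
issues = validate(record)
print("accepted" if not issues else f"rejected: {issues}")
```

Checks like these, applied uniformly, are what let downstream users treat records from many independent citizen initiatives as one trustworthy dataset.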

Papa Seck: Great, thank you very much, Joseph, and again, thanks for your strong support. So we still have some time, and I would like to now open up for questions, both from those in the room but also online. Yes, please, and please introduce yourself.

Dina: Hello, everyone. I'm Dina. I'm from Brazil. And first of all, thank you for this amazing meeting and all the information shared. And my question is, how can data-driven human rights initiatives also include children and older adults? Because women and people with disabilities were mentioned, but I would like to also know about initiatives for these groups. And it would be amazing if you could also mention inspiring examples regarding these communities.

Audience: Thank you. I also had a question. Thank you so much for that discussion, that was actually really helpful. So I had a slightly different question. When you talk about building really high-quality, good-value data sets that are used for better policymaking and that citizens can use for good, there's also a lot of harm, I think, that you can do with these kinds of data sets. Because when you have these data sets and you're essentially sharing them with the aim of doing public good, if you're sharing them widely, or actually if they're anywhere accessible, there's nothing to stop maybe a company or any other actor from using this data to exploit, say, certain kinds of societal problems or any sort of a divide to make things worse. You already have political consultancies that do this when it comes to elections. So sharing public data, yes, it could lead to public good, but there's also, I think, potential for public harm, especially now with so many AI systems being deployed. So how would you prevent against that? How would you ensure that all of this data collection is only being used for good? Thank you.

Papa Seck: Thank you. Howie, do we have any questions online? Not at this moment. Great.

Papa Seck: Thanks. Do any of you want to take the questions that were asked? I can go on. Yes, please, go ahead.

Bonnita Nyamwire: Yeah, I can go on the children one. So that is what I talked about: using the intersectional lens in gathering citizen data. Because when you do things using an intersectional lens, then you're able to see who has been left out and who has been included, even across the different categories. Because if you're focusing on children, they also have different categories, you know, and the same applies to women and girls and also to the older adults that you talked about. I know that there are some organizations doing work on children being online. I know that Plan International is doing work on that; they have done research on cyberbullying online for young girls. I know that UNICEF is also doing a lot of work on that one. I know that Child Fund as well, the different offices of Child Fund globally, they're also doing work around children's rights online. Yes, thank you.

Papa Seck: Great, thank you. Anyone else for the second question, on the harm? I can go ahead, this is Elizabeth. Yes, please go ahead.

Elizabeth Lockwood: Just briefly, in the Copenhagen Framework we do have principles that guard human rights; it is the central core theme of the framework. So this is something that can be applied to address the very good question that was asked from the audience. And I also think it's important that data need to be confidential and protected, but in other cases also open and accessible. So I think we also need that balance, and it's really important to look at that balance and to monitor and retain it. Thank you.

Papa Seck: And Dina, just to add to your question, as part of the collaborative, we have various organizations working on different issues. So obviously, gender, women and girls, is an important dimension of it, but it's not the only one. There are other organizations working on various different dimensions, and I think that's also what makes the collaborative really rich. Papa, if I may, I would just add. Yes, please go ahead, Joseph.

Joseph Hassine: Just one point on the second question. I think there are some organizations doing really interesting work in ensuring that data collection efforts are equitable and fair. In particular areas that I've seen, such as, for example, indigenous languages: these are areas where indigenous communities can benefit from leveraging AI tools, but those tools are often not available in native languages. At the same time, AI could be a tool for preserving native languages, many of which are unfortunately becoming extinct in the US and elsewhere. But of course that has really meaningful risks associated with ownership of that data, and with whether these communities want their data ingested by systems. So I think these questions take place at a large scale, but also at a more issue-specific scale, where it's how do we make the data appropriate and safe in this particular instance. And I've spoken to some organizations who are, for example, leading efforts on, if you're collecting indigenous data, here is a kind of charter and constitution of how that data can be used, how communities should be compensated, how they should be included in the process, and what ownership they should have of the data moving forward. And I think work like that is really critical to that second question of how we avoid some of the risks here. Great, thank you. Thank you very much, Joseph. So we have time for just one final lightning round for all the panelists.

Papa Seck: And just, in no more than a minute, what would you advise the collaborative to do to enrich its work in the digital space? Just one piece of advice for the collaborative that we can take into consideration.

Line Gamrath Rasmussen: So let’s start with Lene. Yes, thank you. One piece of advice, that’s hard, but I think one thing we have to realize is not just consider the end result. So we might have great citizen data that’s inclusive and that really reflects the society we’re in, but we also have to get it in a way, the process has to be human rights-based as well. And that means that we have to think about the human rights-based principles of participation, accountability, non-discrimination, empowerment, and legality. And also remember that this human rights due diligence, if you wanna call it, is ongoing. It’s not something you do once and then you’re sorted and then the rest you do for the next five years is fine. You have to keep doing and keep assessing who might be harmed by the products or services that you’re using and how you’re using. And then one final thing is maybe also this thing about considering offline alternatives as maybe the only option for some people to participate in and be empowered by this, that we cannot just, even if we have like universal access, even if it’s affordable, there will be people that we cannot reach online. So we have to think about and be serious about offline alternatives as well. Thank you. Yeah, thank you. The point about the digital divide is actually, I think, quite central to I think all of the discussions in order to make sure that we don’t leave anyone behind. Bonita, can I go to you?

Bonnita Nyamwire: Thank you, Papa. So for me, my one piece of advice would be that all key stakeholders doing work in the digital ecosystem, looking at all these issues, should not work in silos. They should work together, because you find government is doing this, civil society is doing the same thing, and the private sector is doing the same thing, when instead they could all work together, leveraging each other's efforts and the structures and systems that each of those stakeholders has, to be able to address some of these issues. For instance, for the safety issues that one of the participants asked about, it would mean looking at what government has, what the tech companies have, and what civil society has, and then working together to sort out most of these issues, and also to ensure that we do not leave anyone behind. Because working in silos, we may forget some people, but working together, we will not leave anyone behind, including all the categories that have been talked about: the women, the girls, the children, the older adults, the persons with disabilities. Thank you. Great, thank you very much. Elizabeth, over to you.

Elizabeth Lockwood: Thank you, Papa. I think the collaborative needs to be in the conversation. We need to be part of this work, and meaningfully part of this work. One way is to engage in the monitoring and implementation of the Global Digital Compact. I think that's a very, very good way that we can really be part of this as a collaborative. We should also strengthen capacity and invest in inclusive citizen data and participation in the digital

Papa Seck: space, especially for the marginalized groups which we've been talking about. And I also echo that we should work cohesively and collaboratively instead of in silos. Thank you. Great, thank you very much. Dr. Hem. Thank you, Papa. Yes, in my understanding, citizen data are not

Dr. Hem Raj Regmi: going to replace the existing official data immediately, but in the future we can think about it. So the focus should be on those areas where there is a data gap. Since COVID, the data collection system for official data has already changed. We have already moved from PAPI, the paper-based approach, to CAPI, the computer-based approach, and even telephone interviewing has become quite common, so we can link that information with the digital world. My idea is, let's start from a small area, maybe a municipality or a small theme, or maybe a district or at most a province. Let's not assume immediately that citizen-generated data can fill the gap at the national level. Let's start from the municipality, or let's start from some marginalized areas, from marginalized communities, from some societal segments of a particular caste or ethnicity, or from women, children, or the elderly. Then we can integrate this data with the national data system, and then it will be available digitally and in different forms. Yes, that's my opinion.

Papa Seck: Great. Thank you very much, Dr. Hem. Joseph, over to you. Thank you. I think I view this through the lens of how we get more resources to the collaborative.

Joseph Hassine: And to that end, I think the work that's been done already to pilot some of these efforts is critical. And a critical next step that we often miss in the data space is how we capture the impact that this is having and tell that story. Because I think we'll ultimately need those throughlines of what data an organization created, what decision that ultimately led to, and what impact that ultimately had on a community of people. And the better we get at capturing and telling those stories, the more we'll be able to find support for this work and continue to scale it.

Papa Seck: Great. Thank you very much. And this wasn’t part of the plan, but I will make it as a prerogative to as a chair’s prerogative. So I would like to put on the spot, Ms. Howie Chen, who’s at the UN Statistics Division and who’s really been, I think, at the forefront of this work and including driving this session. So I don’t think we can close without giving you the floor, Howie. Please, over to you.

Speaker: Thank you, Papa, for putting me on the spot. No, it's really a great session. Thank you so much, first of all, Papa, for being such a great moderator, and to all the speakers. And great to see you over there, Bonnita, and all the speakers. Without all the help from you, this would not be possible. Thanks to Francesca and her team for making it happen, including the interpretation. And of course, the collaborative is a collaborative: we all work together. So I am really grateful for all the great work that we've done together. And thank you, Papa, for co-leading the collaborative with us. We look forward to continuing the conversation. Great. Thank you very much, Howie. And really, again, thanks to all of you for a rich conversation.

Papa Seck: There are many takeaways and, obviously, I won't summarize, but for me, some of the key points that came up are really around meaningful citizen participation in the data value chain and ensuring that digital systems and policies address the diverse needs of marginalized and underrepresented communities. We also need to foster inclusivity and equity in the digital era. Citizen data initiatives, as we've heard, such as the Copenhagen Framework, are vital for integrating citizens' perspectives into digital transformation. We also need them for promoting transparency and safeguarding human rights in data governance processes. We've heard, and seen, how partnerships between civil society, national statistical offices, human rights institutions, academia and the private sector can really help to amplify the effectiveness of citizen data in creating innovative policy solutions. And here, given the richness of the work in this area, the sky is really the limit. In terms of action points, I noted three. One is promoting and encouraging the adoption and implementation of the Copenhagen Framework and citizen data to ensure meaningful citizen participation and the integration of marginalized communities in data governance. We need to develop and promote tools such as those that were highlighted here today and include the relevant stakeholders to ensure compliance with ethical and human rights standards, but also with data standards as well. And we need to strengthen capacity and also increase investments in inclusive citizen participation in data and digital spaces. Again, we need to ensure that marginalized communities and population groups are also included. So with that, we'll close here. And thank you very much to all of you for a great session.


Papa Seck

Speech speed

131 words per minute

Speech length

2013 words

Speech time

921 seconds

Citizen data essential for fostering inclusive digital environments

Explanation

Papa Seck emphasizes the importance of citizen data in creating inclusive digital environments. He argues that citizen participation in data-driven processes is crucial for addressing diverse community needs.

Evidence

Mentions the Global Digital Compact considering inclusivity as a cornerstone for a fair and equitable digital future.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Bonnita Nyamwire

Joseph Hassine

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation


Bonnita Nyamwire

Speech speed

128 words per minute

Speech length

1185 words

Speech time

554 seconds

Citizen data helps identify and address systemic biases

Explanation

Bonnita Nyamwire argues that involving women and girls in data governance is essential to identify and address systemic biases. This ensures fairer systems and policies that reflect their unique challenges and experiences.

Evidence

Mentions research conducted on women in politics, media, and human rights defenders across African countries, highlighting issues like online gender-based violence and technology-facilitated GBV.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Joseph Hassine

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation

Involving women and girls in data governance practices

Explanation

Nyamwire emphasizes the importance of involving women and girls in data governance to ensure their perspectives and needs are not overlooked. She argues that their inclusion is vital for achieving gender parity in leadership and decision-making roles in the digital era.

Evidence

Mentions programs like Vote Women and Future of Work for Women in the Media, implemented in various African countries to improve women’s wellbeing in digital spaces.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Agreed with

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Meaningful participation of marginalized groups

Including children and older adults in data initiatives

Explanation

Nyamwire highlights the importance of using an intersectional lens in citizen data gathering to include different categories of people, including children and older adults. She emphasizes the need to consider various demographics in data initiatives.

Evidence

Mentions organizations like Plan International, UNICEF, and Child Fund working on children’s rights online and cyberbullying for young girls.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Importance of cross-sector collaboration

Explanation

Nyamwire advises that all key stakeholders in the digital ecosystem should work together rather than in silos. She argues that collaboration between government, civil society, and private sector can leverage each other’s efforts and structures to address issues more effectively.

Major Discussion Point

Collaboration and Capacity Building


Joseph Hassine

Speech speed

162 words per minute

Speech length

973 words

Speech time

359 seconds

Citizen data critical for accurate understanding of world challenges

Explanation

Joseph Hassine emphasizes the importance of citizen-generated data in providing a more inclusive and accurate picture of the world. He argues that this data is foundational for building tools and informing policymaking.

Evidence

Mentions Google.org’s focus on data that enables more informed policymaking and decision-making, as well as data that enables more accurate and inclusive AI tooling.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Bonnita Nyamwire

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation

Addressing potential harms from data sharing and misuse

Explanation

Hassine acknowledges the potential risks associated with data sharing and misuse. He emphasizes the need for careful consideration of data ownership and usage, especially in sensitive contexts like indigenous language preservation.

Evidence

Mentions organizations working on charters and constitutions for data use, compensation, and ownership when collecting indigenous data.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Line Gamrath Rasmussen

Elizabeth Lockwood

Agreed on

Ensuring human rights and ethical standards in citizen data

Capturing and communicating impact of citizen data initiatives

Explanation

Hassine advises focusing on capturing and communicating the impact of citizen data initiatives. He argues that demonstrating the real-world effects of data-driven decisions is crucial for garnering support and scaling these efforts.

Major Discussion Point

Collaboration and Capacity Building

Differed with

Dr. Hem Raj Regmi

Differed on

Approach to implementing citizen data initiatives


Line Gamrath Rasmussen

Speech speed

154 words per minute

Speech length

831 words

Speech time

322 seconds

Tools needed to assess human rights risks in digital projects

Explanation

Line Gamrath Rasmussen emphasizes the need for tools to assess human rights risks in digital projects. She argues that these tools help identify potential issues and provide actions to mitigate risks in technology deployment.

Evidence

Mentions the Digital Rights Check tool developed by the Danish Institute for Human Rights, which helps identify human rights risks in digital projects.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Elizabeth Lockwood

Joseph Hassine

Agreed on

Ensuring human rights and ethical standards in citizen data

Need for human rights-based approach in data collection process

Explanation

Rasmussen stresses the importance of a human rights-based approach throughout the data collection process. She argues that principles of participation, accountability, non-discrimination, empowerment, and legality should be considered continuously.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Elizabeth Lockwood

Joseph Hassine

Agreed on

Ensuring human rights and ethical standards in citizen data


Elizabeth Lockwood

Speech speed

138 words per minute

Speech length

903 words

Speech time

389 seconds

Citizen data can fill critical data gaps on marginalized groups

Explanation

Elizabeth Lockwood argues that citizen data, particularly data led by organizations of persons with disabilities, is crucial for filling critical data gaps. This data provides evidence to influence policies and ensure data reflects reality for marginalized groups.

Evidence

Mentions examples of NGOs and Organizations of Persons with Disabilities gathering data during the COVID pandemic to understand barriers and solutions for persons with disabilities.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Bonnita Nyamwire

Joseph Hassine

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation

Ensuring accessibility for persons with disabilities

Explanation

Lockwood emphasizes the importance of ensuring digital tools and platforms are accessible for persons with disabilities. She argues that organizations of persons with disabilities should lead or co-lead citizen data initiatives to ensure inclusivity.

Evidence

Mentions examples like the Digital Accessibility Rights Evaluation Index and advocacy by the European Blind Union leading to improved accessibility in voting systems.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Agreed with

Bonnita Nyamwire

Dr. Hem Raj Regmi

Agreed on

Meaningful participation of marginalized groups

Importance of data confidentiality and protection

Explanation

Lockwood highlights the need for balance between data confidentiality and accessibility. She argues that while data needs to be protected, it should also be open and accessible in certain cases.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Line Gamrath Rasmussen

Joseph Hassine

Agreed on

Ensuring human rights and ethical standards in citizen data

Need to strengthen capacity for inclusive citizen data

Explanation

Lockwood advises strengthening capacity and increasing investments in inclusive citizen participation in digital spaces. She emphasizes the importance of including marginalized communities and population groups in these efforts.

Major Discussion Point

Collaboration and Capacity Building


Dr. Hem Raj Regmi

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Citizen data as alternative source for areas with data gaps

Explanation

Dr. Hem Raj Regmi proposes citizen data as an alternative source for areas with data gaps, particularly for marginalized communities. He argues that citizen-generated data can complement official data systems, especially in remote areas or for specific societal segments.

Evidence

Mentions Nepal’s new statistics act from 2022 which includes provisions for using citizen-generated data as an alternative data source.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Bonnita Nyamwire

Joseph Hassine

Elizabeth Lockwood

Agreed on

Importance of citizen data for inclusive digital transformation

Focusing on marginalized communities and population groups

Explanation

Regmi emphasizes the importance of focusing citizen data efforts on marginalized communities and specific population groups. He argues that this approach can help fill data gaps for underrepresented segments of society.

Evidence

Mentions plans to use the Copenhagen framework to produce data on gender-based violence and disaster impacts in Nepal.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Agreed with

Bonnita Nyamwire

Elizabeth Lockwood

Agreed on

Meaningful participation of marginalized groups

Starting with small-scale pilots before national implementation

Explanation

Regmi advises starting with small-scale pilots of citizen data initiatives before national implementation. He suggests focusing on specific areas or themes at the municipal or district level to integrate citizen-generated data with the national data system gradually.

Major Discussion Point

Collaboration and Capacity Building

Differed with

Joseph Hassine

Differed on

Approach to implementing citizen data initiatives

Agreements

Agreement Points

Importance of citizen data for inclusive digital transformation

Papa Seck

Bonnita Nyamwire

Joseph Hassine

Elizabeth Lockwood

Dr. Hem Raj Regmi

Citizen data essential for fostering inclusive digital environments

Citizen data helps identify and address systemic biases

Citizen data critical for accurate understanding of world challenges

Citizen data can fill critical data gaps on marginalized groups

Citizen data as alternative source for areas with data gaps

All speakers emphasized the crucial role of citizen data in creating inclusive digital environments, addressing systemic biases, and filling data gaps, particularly for marginalized groups.

Meaningful participation of marginalized groups

Bonnita Nyamwire

Elizabeth Lockwood

Dr. Hem Raj Regmi

Involving women and girls in data governance practices

Ensuring accessibility for persons with disabilities

Focusing on marginalized communities and population groups

Speakers agreed on the importance of involving marginalized groups, including women, girls, persons with disabilities, and other underrepresented communities, in data governance and collection processes.

Ensuring human rights and ethical standards in citizen data

Line Gamrath Rasmussen

Elizabeth Lockwood

Joseph Hassine

Tools needed to assess human rights risks in digital projects

Need for human rights-based approach in data collection process

Importance of data confidentiality and protection

Addressing potential harms from data sharing and misuse

Speakers emphasized the need for tools and approaches to assess and mitigate human rights risks in digital projects, ensure ethical data collection processes, and address potential harms from data sharing and misuse.

Similar Viewpoints

Both speakers emphasized the importance of collaboration and effectively communicating the impact of citizen data initiatives to garner support and scale efforts.

Bonnita Nyamwire

Joseph Hassine

Importance of cross-sector collaboration

Capturing and communicating impact of citizen data initiatives

Both speakers advocated for building capacity and starting with smaller-scale initiatives before expanding to larger implementations of citizen data projects.

Elizabeth Lockwood

Dr. Hem Raj Regmi

Need to strengthen capacity for inclusive citizen data

Starting with small-scale pilots before national implementation

Unexpected Consensus

Inclusion of offline alternatives in digital initiatives

Line Gamrath Rasmussen

Need for human rights-based approach in data collection process

While most speakers focused on digital solutions, Rasmussen unexpectedly emphasized the importance of considering offline alternatives for those who cannot be reached online, highlighting a unique perspective on inclusivity.

Overall Assessment

Summary

The speakers largely agreed on the importance of citizen data for inclusive digital transformation, the need for meaningful participation of marginalized groups, and the importance of ensuring human rights and ethical standards in data collection and use.

Consensus level

High level of consensus among speakers, with strong agreement on core principles. This suggests a unified approach to promoting inclusive citizen data initiatives, which could lead to more effective implementation and policy development in this area.

Differences

Different Viewpoints

Approach to implementing citizen data initiatives

Dr. Hem Raj Regmi

Joseph Hassine

Starting with small-scale pilots before national implementation

Capturing and communicating impact of citizen data initiatives

Regmi advocates for starting with small-scale pilots at municipal or district levels, while Hassine emphasizes the importance of capturing and communicating the impact of initiatives to scale efforts.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were minor and primarily focused on implementation strategies rather than fundamental principles.

Difference level

The level of disagreement among speakers was low. Most speakers agreed on the importance of citizen data for inclusive digital transformation and the need to ensure human rights and ethical standards. The minor differences in approach do not significantly impact the overall consensus on the topic’s importance and general direction.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of protecting rights in data collection, but Rasmussen focuses on a continuous human rights-based approach throughout the process, while Lockwood emphasizes the need for balance between confidentiality and accessibility.

Line Gamrath Rasmussen

Elizabeth Lockwood

Need for human rights-based approach in data collection process

Importance of data confidentiality and protection

Similar Viewpoints

Both speakers emphasized the importance of collaboration and effectively communicating the impact of citizen data initiatives to garner support and scale efforts.

Bonnita Nyamwire

Joseph Hassine

Importance of cross-sector collaboration

Capturing and communicating impact of citizen data initiatives

Both speakers advocated for building capacity and starting with smaller-scale initiatives before expanding to larger implementations of citizen data projects.

Elizabeth Lockwood

Dr. Hem Raj Regmi

Need to strengthen capacity for inclusive citizen data

Starting with small-scale pilots before national implementation

Takeaways

Key Takeaways

Citizen data is essential for fostering inclusive digital environments and addressing systemic biases

Human rights and ethical standards must be ensured when collecting and using citizen data

Meaningful participation of marginalized groups (women, persons with disabilities, children, etc.) is crucial in data governance

Cross-sector collaboration and capacity building are needed to advance citizen data initiatives

The Copenhagen Framework provides important principles for citizen data collection and use

Resolutions and Action Items

Promote and encourage adoption of the Copenhagen Framework on citizen data

Develop and promote tools to ensure compliance with ethical and human rights standards in data collection

Strengthen capacity and increase investments in inclusive citizen participation in digital spaces

Engage in monitoring and implementation of the Global Digital Compact

Capture and communicate the impact of citizen data initiatives to attract more resources and support

Unresolved Issues

How to fully prevent potential harms from data sharing and misuse

Specific strategies for including children and older adults in data initiatives

How to balance data confidentiality/protection with openness and accessibility

Addressing the digital divide to ensure offline alternatives for data participation

Suggested Compromises

Start with small-scale pilots (e.g. at municipality level) before national implementation of citizen data initiatives

Focus citizen data efforts on areas with existing data gaps rather than replacing all official data sources

Develop charters or constitutions for data use when working with specific communities (e.g. indigenous groups) to address ownership and compensation concerns

Thought Provoking Comments

We are looking for the alternative data sources which may be a little bit cost effective, which may be more reliable, which may be more representative. These non-traditional data sources may be different types, for example, big data may be there supported by the AI and then all these computers and then with high velocities. But the capacity is always limited to manage this big data. That is why the best alternative data source that we assume may be the citizen generated data.

speaker

Dr. Hem Raj Regmi

reason

This comment introduces the idea of citizen-generated data as a cost-effective and potentially more representative alternative to traditional data sources, highlighting a key advantage of this approach.

impact

This set the stage for much of the subsequent discussion about the potential and challenges of citizen-generated data, framing it as a promising solution to data gaps.

Involving women and girls in data governance is essential because their perspectives, their needs, their experiences are oftentimes overlooked in decision making and yet they are very important.

speaker

Bonnita Nyamwire

reason

This comment highlights the importance of inclusivity in data governance, specifically focusing on the often-overlooked perspectives of women and girls.

impact

It shifted the conversation to focus more explicitly on issues of inclusivity and representation in data collection and governance, leading to further discussion of marginalized groups.

It’s only by ensuring that organizations of persons with disabilities are leading or co-leading citizen data initiatives that digital tools will be inclusive, reflecting the reality and needs of the communities themselves.

speaker

Elizabeth Lockwood

reason

This comment emphasizes the critical importance of having marginalized communities lead data initiatives about themselves, rather than just being subjects of data collection.

impact

It deepened the conversation about inclusivity by suggesting a more active role for marginalized communities in the data collection process, moving beyond just representation to leadership.

We might have great citizen data that’s inclusive and that really reflects the society we’re in, but we also have to get it in a way, the process has to be human rights-based as well. And that means that we have to think about the human rights-based principles of participation, accountability, non-discrimination, empowerment, and legality.

speaker

Line Gamrath Rasmussen

reason

This comment introduces the important perspective that the process of data collection itself must be rights-based, not just the end result.

impact

It added complexity to the discussion by highlighting that ethical considerations need to be integrated throughout the entire data collection process, not just in how the data is used.

Overall Assessment

These key comments shaped the discussion by progressively broadening and deepening the conversation around citizen-generated data. The discussion moved from identifying citizen data as a potential solution to data gaps, to exploring how to make such data truly inclusive and representative, to considering the ethical implications of the entire data collection process. This progression reflects a nuanced and multifaceted approach to the topic, considering practical, ethical, and rights-based perspectives.

Follow-up Questions

How can data-driven human rights initiatives include children and older adults?

speaker

Dina (audience member)

explanation

The discussion focused on women and people with disabilities, but including children and older adults is important for comprehensive human rights initiatives.

How can we prevent the misuse of public data sets by malicious actors?

speaker

Audience member

explanation

While data sets can be used for public good, there’s potential for exploitation. Safeguards are needed to ensure data is only used for beneficial purposes.

How can we balance the need for data confidentiality and protection with the need for open and accessible data?

speaker

Elizabeth Lockwood

explanation

This balance is crucial for maintaining data integrity while also ensuring its usefulness and accessibility.

How can we ensure equitable and fair data collection efforts, particularly for indigenous communities?

speaker

Joseph Hassine

explanation

There are unique challenges and risks associated with collecting data from indigenous communities, requiring special considerations for data ownership and use.

How can we develop offline alternatives for data collection to include those who cannot be reached online?

speaker

Line Gamrath Rasmussen

explanation

Even with universal access, some people may not be reachable online, making offline alternatives crucial for inclusive data collection.

How can different stakeholders in the digital ecosystem work together more effectively instead of in silos?

speaker

Bonnita Nyamwire

explanation

Collaboration between government, civil society, and private sector is necessary to address digital issues comprehensively and ensure no one is left behind.

How can the Collaborative on Citizen Data engage in monitoring and implementing the Global Digital Compact?

speaker

Elizabeth Lockwood

explanation

This engagement could be a significant way for the Collaborative to be meaningfully involved in shaping digital policies and practices.

How can we start implementing citizen-generated data at smaller scales (e.g., municipality level) before scaling to national levels?

speaker

Dr. Hem Raj Regmi

explanation

Starting small could be an effective way to integrate citizen-generated data into national data systems gradually.

How can we better capture and communicate the impact of citizen data initiatives?

speaker

Joseph Hassine

explanation

Demonstrating the real-world impact of citizen data projects is crucial for attracting more resources and support for this work.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #162 Overregulation: Balance Policy and Innovation in Technology

WS #162 Overregulation: Balance Policy and Innovation in Technology

Session at a Glance

Summary

This workshop focused on balancing AI regulation and innovation, exploring how to foster technological advancement while ensuring safety and ethical standards. Panelists from diverse backgrounds discussed various regulatory approaches to AI governance, including risk-based, human rights-based, principles-based, rules-based, and outcomes-based models. They emphasized the need for flexible, adaptable regulations that can keep pace with rapid technological changes.


Key issues addressed included the role of AI in combating child sexual abuse material (CSAM), the importance of human rights in AI governance, and the challenges of implementing AI in healthcare. Panelists stressed the need for context-specific regulations, noting that a one-size-fits-all approach may not be suitable across different regions and sectors.


The discussion highlighted the importance of public participation in developing AI policies and the need for capacity building and digital literacy. Panelists shared examples of how AI has been used innovatively during crises like the COVID-19 pandemic, demonstrating the potential benefits of flexible regulatory approaches.


The workshop also touched on the challenges of regulating AI without stifling innovation, with some arguing that policies might be preferable to strict regulations in certain cases. The importance of considering local needs and existing regulatory frameworks when developing AI governance strategies was emphasized.


Overall, the discussion underscored the complexity of AI regulation and the need for a balanced approach that protects human rights and public safety while allowing for technological progress and innovation.


Keypoints

Major discussion points:


– Balancing AI regulation and innovation


– Different approaches to AI governance (e.g. risk-based, human rights-based, principles-based)


– Challenges of regulating AI, including privacy concerns and potential misuse (e.g. for child exploitation)


– Need for AI literacy and capacity building, especially in developing countries


– Importance of considering local context when developing AI policies


Overall purpose:


The goal of this discussion was to explore how to effectively regulate AI technologies in a way that promotes innovation while also protecting public safety and ethical standards. The panelists aimed to share diverse perspectives on AI governance approaches from different regions and sectors.


Tone:


The overall tone was thoughtful and constructive. Panelists acknowledged the complexity of the issues and the need to balance different priorities. There was general agreement on the importance of regulation, but also caution about over-regulation stifling innovation. The tone remained analytical and solution-oriented throughout, with panelists offering nuanced views on different regulatory approaches.


Speakers

– Nicolas Fiumarelli: Moderator, represents the Latin American and Caribbean group from the technical community


– Natalie Tercova: Chair of the IGF in the Czech Republic, member of ICANN, vice facilitator on the board of the ISOC Youth Standing Group, PhD candidate focusing on digital skills of children and adolescents


– Paola Galvez: Tech policy consultant, founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, team leader at the Center for AI and Digital Policy


– Ananda Gautam: Represents the Youth Coalition on Internet Governance, global AI governance expert


– James Nathan Adjartey Amattey: From the private sector in Africa, focuses on innovation and impact on regulatory practices


– Osei Manu Kagyah: Online moderator


Additional speakers:


– Agustina: Audience member from Argentina


Full session report

AI Regulation and Innovation: Striking a Balance


This workshop explored the complex challenge of balancing AI regulation with innovation, bringing together experts from diverse backgrounds to discuss various approaches to AI governance. The discussion, moderated by Nicolas Fiumarelli, highlighted the need for flexible, adaptable regulations that can keep pace with rapid technological changes while ensuring safety and ethical standards.


Key Themes and Discussions


1. Approaches to AI Regulation


Paola Galvez, a tech policy consultant, stated that we are past the question of whether to regulate or not, and now the focus is on how to regulate. She outlined several regulatory approaches, including:


– Risk-based


– Human rights-based


– Principles-based


– Rules-based


– Outcomes-based


Galvez emphasized that regulation should not stifle innovation and stressed the importance of human rights-based approaches to AI governance, while acknowledging the implementation challenges these approaches face.


Ananda Gautam, representing the Youth Coalition on Internet Governance, advocated for flexible, principle-based approaches that can foster innovation while protecting rights. He offered a historical perspective, noting that if the internet had been heavily regulated in its early days, it might not have developed into the tool we use today.


Natalie Tercova, drawing on her research in the healthcare sector, proposed a risk-based approach, suggesting that high-risk AI applications should undergo rigorous review, while low-risk innovations could proceed under lighter regulatory requirements.


2. Balancing Innovation and Safety


James Nathan Adjartey Amattey, from the private sector in Africa, pointed out that the COVID-19 pandemic demonstrated the need for innovation over rigid regulation in times of crisis. He argued that certain regulatory frameworks are necessary for innovation to flourish, challenging the notion that regulation and innovation are inherently opposed.


3. Context-Specific Regulation


Panelists stressed the importance of developing context-appropriate AI governance, particularly for developing countries. Paola Galvez cautioned against simply copying EU regulations, arguing that local needs and existing regulatory frameworks should inform AI governance strategies.


4. AI Literacy and Capacity Building


James Nathan Adjartey Amattey highlighted the need for AI literacy programs for regulators, developers, and users to understand the risks and benefits of AI technologies. Paola Galvez emphasized that digital skills development is key to leveraging AI’s potential, particularly in developing countries.


5. Ethical Concerns and Human Rights


Natalie Tercova raised the issue of AI’s dual use in both creating and detecting child sexual abuse material (CSAM), highlighting the complex balance between leveraging AI for child protection and ensuring privacy rights. She discussed the challenges of using AI to detect CSAM while also acknowledging its potential role in generating such content.


6. Public Participation and Multi-stakeholder Collaboration


Panelists agreed on the importance of public participation in developing AI policies. Paola Galvez emphasized that multi-stakeholder collaboration is crucial for creating effective and inclusive AI governance frameworks.


Audience Questions and Responses


The session included a brief Q&A period, where audience members raised questions about:


– The role of AI in addressing climate change


– Strategies for promoting responsible AI development


– The potential for AI to exacerbate existing inequalities


Due to time constraints, not all questions could be addressed in depth, but panelists provided brief responses highlighting the need for continued research and dialogue on these topics.


Unresolved Issues and Future Directions


Several unresolved issues emerged from the discussion, including:


1. How to effectively balance privacy and safety in AI-powered content moderation


2. The extent of responsibility for AI developers and companies for the effects of their technologies


3. How to address the growing AI divide between developed and developing countries


The discussion highlighted the need for continued dialogue and collaboration to address these complex challenges.


Conclusion


The workshop concluded with a group photo of the panelists and moderator. Throughout the session, speakers emphasized the importance of flexible, context-specific governance strategies that can adapt to rapid technological changes and address the unique needs of different regions and sectors. The diverse perspectives shared by the panelists provided valuable insights into the ongoing challenges and opportunities in AI regulation and innovation.


Session Transcript

Nicolas Fiumarelli: No worries. We invite everyone to sit at the main table if you want, so you can be more engaged in the session. Yes, we have one speaker who is stuck en route, in the Uber, but we will start the session and he will join later. So okay, good morning and good afternoon, everyone, including those online. It's a great pleasure to welcome you all to this workshop called Overregulation: Balancing Policy and Innovation in Technology, under the sub-theme of Harnessing Innovation and Balancing Risks in the Digital Space. My name is Nicolas Fiumarelli and I will be moderating the session. I represent the Latin American and Caribbean group from the technical community, and it's a privilege to be among such a distinguished group of panelists and participants. I am very glad that we have this many people in the room; I think the session title is very interesting for you. You know, we are in an era when we need to decide whether to regulate or not to regulate, so this is a hot topic nowadays. You know, technology and innovation have always been drivers of societal progress. However, the fast-paced evolution of digital technologies, especially artificial intelligence, presents unique challenges. So how can we foster innovation without stifling it through overregulation? How do we ensure safety and ethical standards while allowing technology to reach its full potential? These are some of the critical questions we are going to address today, but this requires collective deliberation, so you are all invited to share your ideas, and today we aim to address them. The session will be conducted, as you know, in this roundtable format to encourage equal participation and also interaction among our esteemed panelists and the audience. To set the stage, I will briefly introduce our panelists. Following this, each of the panelists will take a moment to introduce themselves and share their motivation for participating in this session. Afterwards, we will dive into the core discussion, addressing some of the key policy questions. Toward the end, we will open the floor for questions from the audience, both online and on-site here, moderated by our colleague Osei. So let's meet our panelists. First, Natalie, Natalie Tercova. She's the chair of the IGF in the Czech Republic, also a member of ICANN, and vice facilitator on the board of the ISOC Youth Standing Group. You know, the ISOC Youth Standing Group, together with the Youth Coalition on Internet Governance, every year organizes youth-led sessions to bring young voices to the Internet Governance Forum. She is also a PhD candidate focusing on the digital skills of children and adolescents, their online opportunities and risks, with an emphasis on online safety, online privacy, and AI. And she has recently contributed to a report on AI usage in medical spheres, exploring the challenges of deploying AI technologies in health care. Additionally, her work includes critical research on the role of AI in addressing child sexual abuse


Natalie Tercova: material, called CSAM. So Natalie, will you please introduce yourself further and share your motivation for joining this session? Thank you so much, Nicolas. Can anyone hear me well? Perfect. So as you said, thank you for summing it up so perfectly. I am representing the academia stakeholder group. I am a researcher; it's my day job. And I was recently very much focused on AI and how it can impact the critical topics I'm focusing on in my research, which on one side is the health system, for instance finding health information online and how people trust health-oriented information provided by, for instance, AI-driven chatbots and so forth. And on the other side, I'm also invested in the topic of CSAM, as you mentioned: harmful content focusing on children, and also the abuse of such materials depicting children in intimate scenarios, where AI is perceived more as a double-edged sword. So I hope to tell you more about this during the session, as I feel this is a crucial topic. Thank you for having me.


Nicolas Fiumarelli: Thank you so much, Natalie. This is, as I said at the beginning, a hot topic, right? We are here to discuss whether it is good to regulate or not to regulate. There are several factors that will bring us to think about regulation. But on the other hand, you know, this could undermine human rights, like access to information, among others. And there are several ways to regulate. So we are looking forward to deep-diving into these kinds of things, to reach good outcomes and some key takeaways on how policymakers can actually find solutions for these kinds of issues. Now I will introduce Paola, Paola Galvez. She is Peruvian and a tech policy consultant dedicated to advancing ethical AI and human-centric digital regulation globally. She holds a Master of Public Policy from the University of Oxford and serves as the founding director of IDON AI Lab, UNESCO's lead AI national expert in Peru, and a team leader at the Center for AI and Digital Policy. Paola brings a unique perspective from her work at the OECD and UNESCO on international AI governance. You all must be hearing about UNESCO these days, because countries all over the world are actually developing national AI strategies, and UNESCO has the RAM, the Readiness Assessment Methodology. Drawing on her experience with the UNESCO AI RAM in Peru, she will provide some insights into balancing regulatory safeguards and fostering innovation on a global scale. So Paola, could you introduce yourself and tell us about your motivation for this workshop?


Paola Galvez: Good morning, everyone. Thanks so much, Nicolás. Thank you all for joining this session. I think it's really a critical discussion to be having, but I will put a different opinion here. I don't think the question is any longer whether to regulate or not to regulate; we're past that, in my opinion. What we're working on now is how to regulate, right? Let me go one step back, because you asked me to introduce myself a bit, and you did so well. Thanks so much, Nicolás. Just to give a bit of an overview, my perspective here comes from a background starting in the private sector. I used to work at Microsoft for almost five years, and it was me saying, innovation must come, please do not prevent innovation in my country, which is a developing one. So I was advocating for self-regulation, but that was back in 2013, when artificial intelligence was just starting in my country. I mean, in other countries it was way more developed, but the topic at the moment was cloud computing. So, just to mention, that was my starting point. Then I worked in government. I advised the Secretary of Digital Transformation of Peru, and that was an absolutely meaningful role, because I contributed to the AI National Strategy and I led the Digital Skills National Strategy. So that changed my view a bit, and I understood what it is to work in government and what the challenges are inside. I'm not saying it is good or justifying anything, but it happens. Then I paused my career and went to Oxford to study, and that's what brought me to international organizations. At the moment, I'm an independent consultant working for UNESCO and the OECD, contributing to my country, because I just finished the UNESCO AI Readiness Assessment Methodology; I can tell you more about it later. I also founded IDON AI Lab, which in Spanish reads as idónea, trying to make AI benefit everyone through evidence-based digital regulation and capacity building directed to women. Now, going to the topic and what motivated me when Nicolás came with the idea of having this discussion: I was all in from moment zero, because I thought, yes, now I have a global perspective, I live in Paris at the moment, and this question is not only arising in developing countries. Even in Paris, I speak with a lot of startups, and now they don't know how to implement the EU AI Act, so it's been crazy and there are a lot of doubts. So I'd like to start, and just leave it here, by giving three key questions to open this conversation. First, we need to think about what the public policy problem is; what we want to regulate follows from that very first question. Regulation is needed to address public policy problems and protect fundamental and collective rights, so let's find the most adequate solution by first identifying the problem. That's the first step. Second, when to regulate? Map the available regulatory instruments, because most of us have consumer codes or consumer law; not every country has a data protection law, but that's a good start, right? Intellectual property laws are in place, so let's see what we have, and assess the feasibility of adopting new rules, because bringing a bill to Congress (and we have somebody working in the Parliament of Argentina at the table, for instance) takes a lot of time. So if we can start by enforcing the laws that we already have in place, that's a good start. And the third question is how to regulate?
Identifying a combination of AI regulatory approaches. I'm happy to tell you more about this, Nicolas; there are several approaches to regulating AI at the moment, and there is no single best one. We need to find the right combination according to our context. Thank you.


Nicolas Fiumarelli: Thank you, Paola. Just summarizing that: the first question is what is the public policy problem, then when to regulate, and finally how to regulate. Okay. So now I will introduce Ananda, here on my left. He was stuck in traffic in the Uber, but he made it. Thank you, Ananda, for being with us. Ananda represents the Youth Coalition on Internet Governance and is a global AI governance expert. He has extensive insights into the global regulatory impacts on technological innovation. So, Ananda, please share more about your work and what you bring today to our discussion.


Ananda Gautam: Thank you, Nicolas, and all the panelists. I'm so honored to be here with you, despite some hiccups, of course. So, I mostly work with young people: capacity building of young people, helping people start youth initiatives in their countries, and how we bring young people into global internet governance, and not only global, but how we engage in their capacity building at regional and national levels. That is my major focus right now. I am also working on different AI policies. Paola and I joined the K-Dev together; I think Sabah is also here, and Sabah was also part of our cohort. We have been learning about how the developing AI landscape is affecting things, and my perspective is mostly concerned with developing nations, because I come from the Global South, and within the Global South, Nepal itself is a very difficult context, and we have many challenges. We have outdated legislation, and some recent legislation, like the EU AI Act, has extraterritorial jurisdiction: it does not apply only in the EU region, and the so-called Brussels effect is affecting legislation worldwide. So countries like Nepal are also trying to build their own AI policies, but what I have been focusing on is how we build the capacity of those developing nations, so that they can build comprehensive AI policies that will actually leverage the power of AI in developing those nations, and of course, how we build the capacity of young people to engage in AI governance processes. Another thing is that, while we talk about the digital divide, we are now seeing an AI divide, you know, between people having access to AI and people not having access to AI. So my focus would be on how we build the capacity of other stakeholders so that we can eliminate this divide. I think I'll go into other things in the second round. Thank you, Nicolas.


Nicolas Fiumarelli: Thank you, Ananda. Just summarizing: that is a great issue to address for developing nations, right, because there are difficult contexts, as you mentioned; not every country is prepared, and each faces different challenges while updating its legislation, which differs from country to country. And in the light of the AI Act, which is a mandatory thing, there is this question of extraterritorial jurisdiction, right, because, you know, the internet by nature is without frontiers. In my opinion, as a technical person looking from the core of the internet, IP addresses are not tied to country boundaries, so sometimes it's not easy to see how legislation from one country can affect the way we regulate on the internet, because, as I said at the beginning, the internet is by nature transfrontier, as the excellency from Saudi Arabia said at the opening ceremony. And as you mentioned, we are in the AI divide now, so this is a new concept that we need to take up, to see how to leverage the power of AI, as you say, Ananda, and to think about the young people who are using AI a lot, right. So now I am going to introduce James, who is our speaker online. If the technical team can put James on the screen, that would be great. James comes from the private sector in Africa, where he focuses on innovation and its impact on regulatory practices. He is going to share examples of how African innovations navigate regulatory challenges and thrive in the face of adversity. So James, you are with us there. Please introduce yourself for the people here on site; we have a full room. And share what drives your interest in this discussion, please.


James Nathan Adjartey Amattey: Thank you very much, Nicolas, for that introduction. My name is James Amattey, and I am from Ghana. I come from a background of product management and software development, where we have had to create both consumer and enterprise products for education, insurance, and banking. I have realized in that space that innovation does not happen in a vacuum: certain regulatory frameworks need to be in place for innovation to come to the forefront. Now, my friend Adan was stuck in traffic and complaining about Uber. Uber is one of those innovations that came about through the advancement of technology, and one of those advancements is GPS tracking. Without GPS tracking, and without it being embedded in phones, we would not have something called Uber; and without the internet, a platform like Uber would not be able to locate our friend and get him to the venue. These are some of the things we tend to lose sight of: when we try to build software and do digital transformation, we think it is all about customer research and market research, but there are certain societal and regulatory conditions that need to trigger innovation. I just gave the example of Uber. There are also examples around regulation itself. When we look at the internet, for example, one catalyst was the breakup of AT&T, which led to the widespread infrastructure development that brought forth the internet. It is not directly correlated, but the regulatory action that broke up AT&T, especially in the United States, is what brought forth advancements in the development of the internet as we know it today. Then there are times when policy gets in the way of innovation. I will use one of our local laws as an example: the E-Levy, a notorious regulation that introduced a tax on financial transactions done electronically. The problem was that Ghana was at the beginning of digital financial literacy, and a lot of people were only just starting to transact online. The law in and of itself was not bad, but the timing and the implementation did not enjoy much public approval. According to reports, there were times when the government of Ghana missed revenue targets from the taxation, and the use of mobile money also declined because of the levy. So regulation may have a good intention, but sometimes it is the implementation that makes it difficult for innovation to come forth. As much as regulation is important, we also have to look at its timing, especially in Africa, where we are mostly catching up to a lot of innovation. We do not have many homegrown solutions, so most of the solutions we use are imported. We have to take our time with regulation and make sure there is enough understanding, enough appreciation, and enough use cases for a technology before we try to regulate it.
Now, I do understand that the timing of regulation matters in the other direction too. As much as we do not want heavy regulation, we also do not want late regulation, where the technology runs ahead of society and becomes very difficult to control. So thank you very much for the floor. Over to you, Nicolas.


Nicolas Fiumarelli: Thank you so much, James. You also touched on this idea that regulatory frameworks sometimes need to be in place for innovation, right? But a population needs to be prepared. Innovation is not bad, as you say, but we need public approval, and there are different cultures and different social contexts across countries. You mentioned that mobile money usage declined because of the levy; on the other hand, it can be difficult for someone who does not know the technology to use that money online. So the implementation of regulation is sometimes a challenge, as is helping all people understand, or acquire, this digital financial literacy, as you call it. Finally, we have Osei Manu Kagyah here at the table. He is our online moderator, taking all the questions from the chat, and he will also direct questions or invite people here on site to take the mic. Osei will ensure active participation for our virtual audience and on site. So Osei, please introduce yourself and tell us about your role in internet governance and in supporting this session.


Osei Manu Kagyah: Hello, thank you very much. This is such an important subject matter: balancing policy and innovation. I will be your online moderator and also help moderate on site. If you have a question, just raise your hand, or if you are joining us online, put your question in the chat box. As a public interest technologist, this topic is very interesting to me. I love how it has been put: I think we have moved past the question of whether to regulate. Is regulation a silver bullet? And if we are going to regulate, how do we approach it? These are the nuances we hope to delve into. We are very excited for this conversation; bring all your inputs and we will dive deeper together. Thank you very much. Over to you, Nicolas.


Nicolas Fiumarelli: Thank you so much, Osei. We will now begin the core discussion of today's workshop, focusing on the critical policy questions. Each speaker will have approximately three to five minutes to respond to the questions directed at them. Policy question number one: what specific regulations currently hinder the adoption of AI, and how can they be reformed to promote technological advancement while ensuring safety? Natalie, this one is specifically for you: your recent work on child sexual abuse material also focuses on AI's role in the creation of this material, right? So can you explain what CSAM is, and where you see the need for regulation?


Natalie Tercova: Thank you so much, Nicolas. Some of you may have joined my lightning talk, which I delivered on day one, if I'm not mistaken, so I hope I will not repeat myself here. Extending the discussion of CSAM, how it affects youth, survivors, and the wellbeing of society overall, is now at its next step as we ask how AI comes into play, and this deepens the crucial aspects of the issue we are focusing on. First, let me start with what it is. Terms like child pornography are better known, but we now prefer the abbreviation CSAM, which stands for child sexual abuse material, because it is broader and also covers material that can be manipulated, for instance through AI models. Recently, in the Czech Republic, where I come from, we saw a high prevalence of images derived from existing materials that were not harmful, spread online sometimes by the child themselves, sometimes by a caregiver or a teacher who captured moments on school property. Then AI stepped in, or rather someone using an AI tool, to make the person appear naked, for instance. Suddenly, already existing material was abused and transformed into CSAM. That is why we are now also focusing on AI in relation to CSAM. With this, I want to highlight that the introduction of AI has prompted many discussions about its potential for harm, unfortunately; however, some people also see its potential for detection. There is a clash between whether AI tools and emerging technologies can help us tackle this issue, or whether they will make everything worse, and that is what I want to bring to this debate. To give you an idea of how prevalent CSAM is, a few statistics: in 2023, across 223 countries and territories, some 50,000 websites hosting CSAM were detected and taken down. We have to be mindful that this is just the tip of the iceberg; there is so much we simply do not know about, in closed forums or on the deep web. And it is already alarming. If we look at specific types of materials, such as pictures and videos, around 85 million were reported globally in 2021. This is a very alarming number. We also have to talk about deepfake technologies and how advanced editing tools make it easier for perpetrators to manipulate images and videos into CSAM. This is not only about those who find some form of excitement in these materials; there is big money involved, because perpetrators know there are people willing to buy such material. So we are trying to find a balance: how to ensure people can use new technologies, which genuinely have great potential for tackling all sorts of harmful content, not just CSAM, while protecting our privacy and the privacy of the most vulnerable, in this case children. Another current discussion concerns software that can detect when CSAM is circulating.
However, this also extends to grooming, the process in which a perpetrator slowly manipulates a child, usually through text. Detecting it would mean some form of software reading the text we type, and that is a major clash between privacy and safety. So I am excited to hear your opinions on where we can find a sweet spot: a good balance between the right to use these technologies to our advantage, seizing all the opportunities AI and other emerging technologies can bring, while minimizing the risks involved. Thank you, Nico.
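The privacy-versus-safety clash Natalie describes is easier to reason about with a concrete picture of how such detection typically works. The sketch below assumes the hash-matching approach popularized by tools in the PhotoDNA family: an image's perceptual hash is compared against a vetted list of hashes of already-verified material, so the system recognizes re-encodings of known images without needing to interpret new content. This is an illustrative sketch only; the hash value, file path, and distance threshold are hypothetical placeholders, not part of any speaker's remarks.

```python
# Minimal sketch of perceptual-hash matching against a vetted list of
# known-CSAM hashes (as curated by hotlines). The hash value, file path,
# and threshold below are hypothetical placeholders for illustration.
from PIL import Image
import imagehash

# Hypothetical hash list supplied by a trusted clearinghouse.
KNOWN_BAD_HASHES = [imagehash.hex_to_hash("f0e1d2c3b4a59687")]

def matches_known_material(path: str, max_distance: int = 4) -> bool:
    """True if the image's perceptual hash falls within a small Hamming
    distance of any known-bad hash (robust to resizing/re-encoding)."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - bad <= max_distance for bad in KNOWN_BAD_HASHES)

if matches_known_material("upload.jpg"):
    print("Flag for human review")  # detection assists; humans decide
```

Client-side scanning proposals extend essentially this mechanism onto users' devices, which is why the debate Natalie raises turns on where such a check runs and who audits the hash list, rather than on the matching arithmetic itself.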


Nicolas Fiumarelli: Thank you, Natalie. You touched on some interesting things. As you mentioned, there is money involved here; there are people who want to buy this kind of content. There is also the question of how to ensure balance, because there are people using technology for good: AI brings a lot of innovation and creativity, and if you look at the advancements nowadays, it is useful in many areas. If you follow the discussions of the Policy Network on Artificial Intelligence about job displacement, that is something happening worldwide, and the digital divide is increasing because of it; it is called the AI divide, as Ananda said. But how do we protect privacy as well? There are solutions, as you mentioned, that run software to detect this material, but that leans towards surveillance. How do we avoid such practices? Some countries take the approach of installing software on all mobile phones, or reaching agreements with the mobile companies; there are only two or three firms, not many, so it may be easy for them to deploy the technology. But on the other side, that is an attack on privacy and human rights. So it is difficult to strike a balance on these critical issues, such as child sexual abuse material, and it is not a new debate: over the past years these issues have drawn different views, in opposite directions. Thank you for raising them as an introduction. Let us move to another area, policy question number two: how can policymakers (and we have policymakers here) and regulatory bodies design more flexible regulations that can adapt to rapid technological advancement, as we were saying, without compromising ethical standards and public safety? Here we also touch on ethics: we can mention bias, copyright, and other issues that are not on the same page as privacy but relate to these emerging technologies. So Paola, now it is your turn. In your opinion, what are the prevailing regulatory approaches to AI governance you have seen? And among all the models out there, in the UNESCO framework and perhaps other documents, is there a particular one that stands out as the most conducive to encouraging innovation?


Paola Galvez: Thank you, Nicolas. Indeed, there are different approaches, but just so we are all aligned on one idea: I will mention several approaches, but they are not exclusive; policymakers can mix them when deciding. I would like to explain five of them. This is not an exhaustive list, there could be more, but we have seen risk-based, human-rights-based, principles-based, rules-based, and outcomes-based approaches. You can read more about these in the WEF report on generative AI governance, published this year. The risk-based approach is the most common one; the European Union adopted it. It optimizes regulatory resources because it focuses efforts on the areas of highest risk and minimizes burdens on low-risk areas. Its advantage is that it allows regulatory frameworks to be flexible and adaptable as circumstances change; the challenge is that risk assessments are complex. At the moment, the EU AI Office is developing guidelines, and there is no single model for how a risk assessment should be done, so the market is developing different ones. But how can we be sure a given risk assessment is the correct one? Is there one? We are still in that process. Second is the human-rights-based approach, which, in my opinion, should be the best one. Why? Because, as Natalie mentioned, and you too, Nicolas, these AI technologies are reproducing society's biases and deepening inequalities, among other challenges. However, we cannot afford not to be tech optimists. AI is a reality, it is with us, and it holds tremendous promise. I believe that, used wisely, it holds real potential to help achieve the SDGs and make us more efficient; and, at least from what I have seen so far, we will not be replaced. But human rights are at stake, and a human-rights-based approach means being grounded in international human rights law, which we already have. The advantage is that the scope is not limited: the whole lifecycle of an AI system must fall under this regulation and be developed and deployed in a way that respects and upholds human rights. There is no doubt. What are the challenges of this approach? There is complexity and opacity in AI systems (we all know these systems are called black boxes), and human rights protections are sometimes broadly worded and hard to interpret, so we need lawyers specializing in international human rights law. That, in my opinion, is what is lacking in the discussion of these AI laws: we do not really have those people at the table. As for hard law, the Council of Europe Convention on AI, Human Rights, and the Rule of Law is the instrument that puts human rights at the center. But it is not mandatory, and the adherence process is under way, so let us see how it goes. It sets basic principles and global standards for what we want from AI, but it is not hard law applying in our countries. The principles-based approach is the one most countries are adopting: the US, with the Executive Order on Safe, Secure, and Trustworthy AI, and Singapore. What is it? It sets out fundamental principles: fairness, accountability.
It is intended to foster innovation. So, to your question, the principles-based approach could be the one that prevents the stifling of innovation while protecting human rights through those principles (fairness, no harm), but it is not complete. The rules-based approach is the Chinese one, with China's Interim Measures for the Management of Generative AI as an example. It is very rigid, with high compliance costs, but it lays out detailed rules and leaves little room for interpretation; that is what applies in China at the moment. The outcomes-based approach is what the Japanese government is applying: it focuses on achieving measurable AI-related outcomes and also intends to foster innovation and compliance, but it offers limited control over the process, because how do you really measure the outcomes? It can be very vague. I would like to finish by saying there is a risk of a Brussels effect, with our Latin American countries trying to do what the European Union has done. It is very important to say that our countries are not Europe; we do not have the same context, so a copy-paste is not the solution. We can take best practices where they are already in place, but it is very hard to judge the results of the EU AI Act's implementation, because it is still very new and, as you know, it is being enforced in phases, so we cannot tell at the moment. And if you take one message away from these five approaches, let it be this: any regulation we want to approve in our countries must go through a meaningful public participation process, meaning sitting all parties at the table and discussing their needs and how they think they can be part of the solution. The Readiness Assessment Methodology, which I can tell you more about later, includes this public consultation process and brings the opinions of the public and citizens to the table. Thank you, Nicolas.


Nicolas Fiumarelli: Thank you, Paola, for that detailed analysis of the five different approaches. I took some notes, so we will have good takeaways from that, including the notable comments you made on each. Along the same line, I will now ask Ananda: what are global examples of flexible AI governance that can inspire policymakers today?


Ananda Gautam: I would like to reflect on something, because our title is about over-regulation and balancing innovation. Let's go back to the 1970s. If the internet had been regulated at the beginning, in its first three decades, before WSIS started, we couldn't have the internet we are using today; we wouldn't even be discussing internet governance. So regulation is not always the best form of governance. Nicolas asked me for the best example: maybe UNESCO. Its AI ethics framework is one of the greatest examples, endorsed by more than 90 (I think 100 now) countries, and WSIS has been working on the second version of its ethical AI concept. These show how principles can bind people, rather than legislation. If you ask me, legislation or policy, I would go for a policy-based approach, which would actually harness the power of AI rather than over-regulate it. Policy can promote business without undermining human rights. As Paola mentioned, a human-rights-based approach is a must. I would also reflect on what Natalie said: as AI develops, there will be both sides, pros and cons. Take cybersecurity: scammers are using AI to manipulate people, and phishing attacks have reached another level with AI. At the same time, cybersecurity tools are being developed that use AI to detect attack patterns faster, and in some cases automated countermeasures are already applied; there are many tools from Palo Alto Networks and other leading cybersecurity companies that employ AI for such detection. I think we are at a very early stage of this development: we have only just seen the power of generative AI, since ChatGPT exploded and so many GPTs became available. One thing is that, while people are using AI, legislation or policy should be very clear about how it will be used by the public. Today, school children are using generative AI such as ChatGPT to add to their knowledge; whether it gives them the right knowledge is crucial. These considerations are very important. They are covered by the different frameworks being developed, but each national context also needs to work out how people will leverage these tools. If something is generated by AI, how do people distinguish it? Maybe we can call it AI literacy: people need to know what they are using and what the consequences are, and be able to tell AI-generated content from original content. Those are the baselines we need to focus on. I'll stop here.
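To make the cybersecurity example concrete, here is a toy sketch of the pattern-detection idea: an unsupervised anomaly detector trained on routine activity flags unusual behaviour for analyst review. The feature set, numbers, and model choice are illustrative assumptions for this sketch, not a description of any vendor's product.

```python
# Toy sketch of AI-based pattern detection in security monitoring:
# an unsupervised model learns "normal" activity and flags outliers.
# Features and values here are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: login attempts/hour, MB sent/hour, distinct IPs used.
normal_activity = rng.normal(loc=[5, 20, 1], scale=[2, 5, 0.5], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

burst = np.array([[120, 900, 40]])  # pattern typical of credential stuffing
print(detector.predict(burst))      # -1 means anomalous: escalate to a human
```

Real deployments combine far more signals and, in line with the human-oversight principle the frameworks discussed here emphasize, typically keep an analyst in the loop before any automated countermeasure fires.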


Nicolas Fiumarelli: I like your ideas, Ananda, but I think recognizing whether something is AI-generated is nowadays more complex than it seems. There is also the complexity of these approaches, because we want to balance flexibility, enforceability, and practicality: risk-based methods demand nuanced assessments; human-rights-based approaches face challenges with opacity and legal gaps; principles-based frameworks often lack enforcement mechanisms. So each approach has problems. The outcomes-based model emphasizes measurement but may struggle, as Paola says, with contextual adaptation, particularly in diverse regions, such as Nepal, for example. Together, these approaches highlight, I think, the need for multidisciplinary collaboration and tailored strategies to address AI's multifaceted risks and opportunities effectively. Going now to our online speaker: James, we would like to see your face. From your experience, how has regulatory flexibility impacted African innovations, in your opinion? James, you're muted.


James Nathan Adjartey Amattey: Okay, sorry. Thank you very much, Nicolas. Your question is very interesting, because we can take COVID as an example. COVID catalyzed, or should I say highlighted, the need for innovation over regulation. During COVID there was little regulation; it was all innovation, and with that innovation we were able to control the cost, the spread, and the lifestyle change that came with COVID. What we want is not a situation where only emergencies allow us to be flexible with laws; we want to adopt that flexibility as a way of life while keeping guard, being on watch. It is like having a security man: you do not hope that a thief attacks you, but he is there for when the thief comes. So I like the idea of policies over regulations: frameworks, constructive ways of doing things that guide people on how to do things properly, rather than prescribing what they can and cannot do. Of course, there are times when prescription is needed, but as we are currently in the experimental phase of innovation, especially with AI, it is paramount that we allow it to spread its wings so we learn what is possible and what is not. In the African context, COVID really enabled this. For example, autonomous drones delivered COVID vaccines, protective equipment, and face masks to remote communities, and trackers were used to identify COVID hotspots and design responses for them, among several other examples. We have also had flooding in Ghana, where most of my work, especially in open data, has helped us use AI to identify roads so that relief could reach victims of the dam spillage in 2023. And we have done a lot of work on public health and the correlation of health data using mobile apps. All of this has been possible through innovation. So I think innovation and regulation should be teammates rather than competitors arguing over who is right and who is superior. We should collaborate more: innovation should not be an afterthought, and regulation should not be an afterthought; rather, we should build these frameworks into our innovation pipelines and into our regulatory pipelines. Thank you very much.


Nicolas Fiumarelli: Thank you so much, James. Due to time constraints, as we are reaching the end of our session, I will put one condensed question to all the panelists, with one minute each to answer. How can successful examples of AI applications, and the international frameworks we were discussing, inform regulatory strategies that balance innovation with safeguards: for example, addressing societal impacts such as job displacement, and critical needs like healthcare and industrial automation? May we start with Paola, drawing from your experience leading the UNESCO AI Readiness Assessment Methodology?


Paola Galvez: Yes. The question was very long, so I will do a wrap-up while answering. First, think about local needs: what regulations do we already have in place, and how can we complement them? Sometimes, and I think this is a personal opinion, we need an umbrella regulation, like the AI Act: not guidelines, but something mandatory. Why? Because the country needs to have a position on what it wants AI to do and to be for its citizens. What is the country's position on lethal autonomous weapons? That should be a yes or no, right? So that could be mandatory: a prohibition, or not. Surveillance, right? Are we using AI for safety and security? Let's be mindful that it can target migrants or other vulnerable communities and minorities, so it is very important how we use it. Taking a position means regulation; that's law. Beyond taking positions, please, let's invest in capacity development. Digital skills are key to using AI, because we will never be able, as a country, to leverage the full potential of this technology if we don't help our citizens understand it and use it at its best. Thank you. This is very condensed, but I'm happy to speak later.


Nicolas Fiumarelli: Thank you so much, Paola. Natalie, would you like to make a one-minute contribution on healthcare, where you are the expert?


Natalie Tercova: Of course, I'll try to be very brief. I very much agree that it depends on the specific case. We sometimes debate what we should do in healthcare, but that is such a broad concept. Ethical considerations such as patient privacy, the protection of patients' data, and minimizing bias in treatment algorithms are, from my perspective, non-negotiable; we must take them into account whenever we talk about healthcare. However, diagnostic tools that assist doctors with critical conditions carry far higher stakes than, for instance, AI tools or other technologies used for administrative scheduling systems, such as setting timelines for certain operations. So again, it is broad, and we have to weigh the level of risk involved. In light of this, I believe high-risk applications should undergo more rigorous review before they come into practice, while low-risk innovations can proceed under lighter regulatory requirements, so we can focus more on innovation and make these things faster and more effective. So again, it is about balance. I don't want to dive into more detail, but I'm of course happy to talk about it, because we recently conducted robust research in Central Europe on AI usage in healthcare. We also asked people whether they use it for their own health questions, for instance asking ChatGPT, "I have this issue, it is bothering me," and whether they trust what the AI tells them. Usage is one thing: people may simply be experimenting, excited overall about these opportunities, while staying mindful that what is recommended to them is not always the best. We have some very interesting insights, and I'm happy to talk more about this over coffee. Thank you.


Nicolas Fiumarelli: Thank you so much, Natalie. We have only a one-hour session, so you can reach Natalie over coffee and continue that conversation. Going to Osei: do we have any question online? And maybe we have one question on site; the first to raise a hand can take it.


Osei Manu Kagyah: Okay, so it's not really a question; it's a suggestion from someone who talked about human rights being at the core of the conversation, as stated by the UN. But I have a question for all of us to mull over. I think the conception of these policy questions starts from a lack of trust among the various stakeholders. A good example is argument A, and this is just an argument, I won't debate it: the UK Secretary of State for Science and Technology, Peter Kyle, argues that tech companies should be treated like states because their investments in innovation exceed those of governments. Argument B, from a few weeks back, is a former Dutch member of the European Parliament arguing that the focus should be on strengthening the primacy of democratic governance and oversight, not on showing humility. She argued further for self-confidence on the part of democratic governments, to make sure these companies and services take their proper role within a rule-of-law-based system and do not overtake it. Obviously, I think argument B sounds persuasive, but then how do we ensure stakeholders do have a say? What I have noticed is that the conception of all these policy conversations begins with that lack of trust. If you have any question on site, please raise your hand, and please be snappy, because we are short on time.


Audience: Yes, thank you so much. My name is Agustina, I'm from Argentina, and I have a quick question. When regulating AI or technology, what I find is that we have different layers or aspects: we have the users, we have the developers, and, for AI, we have the training of the models. In this sense, I see that users are already, so to speak, punished by the law as in the physical world. But on the other side, do you think that companies should be responsible for what they develop, and that those who train the models should also be responsible for the effects this has, or not?


Nicolas Fiumarelli: Maybe some of the panelists want to answer the question, and then we go to the last question here, right? Okay, who wants to answer? Or shall we go directly to the next question?


Audience: Thank you, Moderator. Mine is on the topic of this discussion. Have we really reached the stage of over-regulation, given that, with the advent of ICTs and now with emerging technologies, we have seen regulation playing catch-up? Technology is usually ahead of regulation. So have we reached the stage of over-regulation yet?


Nicolas Fiumarelli: Do you have a question as well? That will be the last one.


James Nathan Adjartey Amattey: Yes, I think I can answer this a bit.


Nicolas Fiumarelli: Okay, James, you can answer and then wrap up.


James Nathan Adjartey Amattey: Yes. There is a risk of innovation going ahead of regulation, but it all boils down to AI literacy, and to us coming to an understanding of what AI really is and what AI is not. For example, if you ask a lot of people what AI is, most will answer ChatGPT; but ChatGPT is just one use case of AI, not AI in and of itself. We need to build literacy programs for regulators, for developers, and for users, so that we understand what our intersecting interests are. Then we can look at those intersecting interests and tailor AI solutions to our own use cases. Most of my work for next year will revolve around AI literacy and building literacy programs to spread knowledge of what AI truly is and is not: what it can do, what it should do, and what it should be allowed to do. We can take it from there. Thank you very much. Happy to connect online; my name is James, and you can find me on LinkedIn.


Nicolas Fiumarelli: OK, thank you for your time, James, and your valuable contribution. Thank you, everyone, for the engaging discussion, and apologies to those who were still in the queue; we are six minutes over time. Today we explored critical aspects of AI regulation and innovation, drawing insights from diverse regions and sectors, as you have seen. A special thanks to our panelists for their valuable contributions, and to our audience, on site and online, for your active participation. Thank you, and enjoy the rest of the IGF. We might take a photo at the front, if you want. Come, everybody. I think we're good.



Paola Galvez

Speech speed

144 words per minute

Speech length

1840 words

Speech time

764 seconds

Regulation is necessary but should not stifle innovation

Explanation

Paola Galvez argues that regulation is needed, but the focus should be on how to regulate rather than whether to regulate. She emphasizes the importance of balancing regulatory safeguards with fostering innovation.


Evidence

Paola mentions different regulatory approaches such as risk-based, human rights-based, principles-based, rules-based, and outcomes-based.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Ananda Gautam


James Nathan Adjartey Amattey


Agreed on

Balancing regulation and innovation


Differed with

Ananda Gautam


Differed on

Approach to AI regulation


Human rights-based approaches are crucial but face implementation challenges

Explanation

Galvez emphasizes the importance of human rights-based approaches in AI regulation. However, she notes that these approaches face challenges due to the complexity and opacity of AI systems, as well as the broad wording of human rights protections.


Evidence

She mentions the Council of Europe AI Convention on AI, Human Rights, and the Rule of Law as an example of a human rights-based approach.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Copy-pasting EU regulations is not appropriate for developing countries

Explanation

Galvez warns against the ‘Brussels effect’ where Latin American countries might try to copy EU regulations. She emphasizes that context matters and that developing countries have different needs and circumstances compared to Europe.


Major Discussion Point

Developing Context-Appropriate AI Governance


Public participation is crucial in developing AI policies

Explanation

Galvez stresses the importance of involving all stakeholders in the development of AI policies. She argues for a meaningful public participation process that includes all parties in discussions about their needs and potential solutions.


Evidence

She mentions the UNESCO AI Readiness Assessment Methodology as an example of a process that includes public consultation.


Major Discussion Point

Developing Context-Appropriate AI Governance


Digital skills development is key to leveraging AI’s potential

Explanation

Galvez emphasizes the importance of investing in capacity development and digital skills. She argues that countries cannot leverage the full potential of AI technology without helping citizens understand and use it effectively.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Ananda Gautam


James Nathan Adjartey Amattey


Agreed on

Importance of AI literacy and capacity building


Local needs and existing regulations should inform AI governance

Explanation

Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance frameworks. She argues for complementing existing regulations rather than creating entirely new ones.


Evidence

She mentions the need for an umbrella regulation that defines a country’s position on key AI issues like autonomous weapons and surveillance.


Major Discussion Point

Developing Context-Appropriate AI Governance



Ananda Gautam

Speech speed

139 words per minute

Speech length

873 words

Speech time

374 seconds

Flexible, principle-based approaches can foster innovation while protecting rights

Explanation

Gautam advocates for policy-based approaches over strict legislation. He argues that this approach can promote business without undermining human rights and allows for more flexibility in governance.


Evidence

He cites the UNESCO AI ethics framework and the WSIS work on ethical AI as examples of principles-based approaches.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Paola Galvez


James Nathan Adjartey Amattey


Agreed on

Balancing regulation and innovation


Differed with

Paola Galvez


Differed on

Approach to AI regulation


AI divide between countries needs to be addressed

Explanation

Gautam highlights the growing AI divide between countries, particularly affecting developing nations. He emphasizes the need to build capacity in these nations to leverage AI’s power effectively.


Major Discussion Point

Developing Context-Appropriate AI Governance


Capacity building in developing nations is crucial

Explanation

Gautam stresses the importance of building capacity in developing nations to engage in AI governance processes. He argues that this is necessary for these countries to develop comprehensive AI policies that leverage AI’s power effectively.


Major Discussion Point

Building AI Capacity and Literacy


Public understanding of AI-generated content is important

Explanation

Gautam emphasizes the need for public understanding of AI-generated content. He argues that people need to be able to distinguish between AI-generated and original content, which he refers to as AI literacy.


Evidence

He mentions the use of AI by school children and the need for them to understand if they are getting the right knowledge.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Paola Galvez


James Nathan Adjartey Amattey


Agreed on

Importance of AI literacy and capacity building



James Nathan Adjartey Amattey

Speech speed

129 words per minute

Speech length

1561 words

Speech time

723 seconds

COVID-19 highlighted need for innovation over rigid regulation

Explanation

Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. He argues for maintaining this flexibility in normal times, balancing innovation with necessary safeguards.


Evidence

He cites examples of autonomous drones delivering COVID supplies and AI-powered trackers identifying hotspots during the pandemic.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Paola Galvez


Ananda Gautam


Agreed on

Balancing regulation and innovation


AI literacy is needed to understand risks and benefits

Explanation

Amattey emphasizes the importance of AI literacy for regulators, developers, and users. He argues that understanding what AI is and isn’t is crucial for tailoring AI solutions to specific use cases and addressing intersecting interests.


Evidence

He mentions his future work will focus on building AI literacy programs.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Paola Galvez


Ananda Gautam


Agreed on

Importance of AI literacy and capacity building



Natalie Tercova

Speech speed

159 words per minute

Speech length

1312 words

Speech time

492 seconds

AI can be used to both create and detect child sexual abuse material

Explanation

Tercova discusses the dual role of AI in relation to child sexual abuse material (CSAM). She highlights how AI can be used to create deepfake CSAM, but also how it can be used to detect and combat such material.


Evidence

She cites statistics on the prevalence of CSAM, mentioning 50,000 websites hosting CSAM in 2023 and 85 million reported materials in 2021.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Risk-based regulation allows focus on high-risk AI applications

Explanation

Tercova advocates for a risk-based approach to AI regulation in healthcare. She argues that high-risk applications should undergo more rigorous review, while low-risk innovations can proceed under lighter regulatory requirements.


Evidence

She contrasts high-risk diagnostic tools with lower-risk administrative scheduling systems in healthcare.


Major Discussion Point

Balancing AI Regulation and Innovation


Patient privacy and data protection are non-negotiable in healthcare AI

Explanation

Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects of AI use in healthcare. She argues that these ethical considerations must be taken into account regardless of the specific application.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Agreements

Agreement Points

Balancing regulation and innovation

speakers

Paola Galvez


Ananda Gautam


James Nathan Adjartey Amattey


arguments

Regulation is necessary but should not stifle innovation


Flexible, principle-based approaches can foster innovation while protecting rights


COVID-19 highlighted need for innovation over rigid regulation


summary

The speakers agree that while regulation is necessary, it should be flexible enough to allow for innovation. They advocate for approaches that balance safeguards with the ability to innovate.


Importance of AI literacy and capacity building

speakers

Paola Galvez


Ananda Gautam


James Nathan Adjartey Amattey


arguments

Digital skills development is key to leveraging AI’s potential


Public understanding of AI-generated content is important


AI literacy is needed to understand risks and benefits


summary

The speakers emphasize the crucial role of AI literacy and capacity building in enabling effective use and governance of AI technologies.


Similar Viewpoints

Both speakers stress the importance of considering local context and needs when developing AI governance frameworks, particularly for developing nations.

speakers

Paola Galvez


Ananda Gautam


arguments

Local needs and existing regulations should inform AI governance


Capacity building in developing nations is crucial


Both speakers emphasize the importance of protecting human rights and privacy in AI applications, while acknowledging the challenges in implementing these protections.

speakers

Natalie Tercova


Paola Galvez


arguments

Patient privacy and data protection are non-negotiable in healthcare AI


Human rights-based approaches are crucial but face implementation challenges


Unexpected Consensus

Risk-based approach to AI regulation

speakers

Paola Galvez


Natalie Tercova


arguments

Regulation is necessary but should not stifle innovation


Risk-based regulation allows focus on high-risk AI applications


explanation

Despite coming from different backgrounds (policy and healthcare), both speakers advocate for a risk-based approach to AI regulation, suggesting a broader consensus on this strategy across sectors.


Overall Assessment

Summary

The main areas of agreement include the need for balanced regulation that doesn’t stifle innovation, the importance of AI literacy and capacity building, and the necessity of considering local contexts in AI governance.


Consensus level

There is a moderate to high level of consensus among the speakers on key issues. This suggests a growing recognition of common challenges and potential solutions in AI governance across different sectors and regions. However, the specific implementation details and priorities may still vary, indicating the need for continued dialogue and collaboration.


Differences

Different Viewpoints

Approach to AI regulation

speakers

Paola Galvez


Ananda Gautam


arguments

Regulation is necessary but should not stifle innovation


Flexible, principle-based approaches can foster innovation while protecting rights


summary

While both speakers emphasize the importance of balancing regulation and innovation, Galvez argues for a more structured regulatory approach, while Gautam advocates for a more flexible, principle-based approach.


Unexpected Differences

Role of COVID-19 in shaping AI regulation

speakers

James Nathan Adjartey Amattey


Paola Galvez


arguments

COVID-19 highlighted need for innovation over rigid regulation


Copy-pasting EU regulations is not appropriate for developing countries


explanation

While not directly contradictory, these arguments present an unexpected difference in perspective on how crises and external influences should shape AI regulation in developing countries.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to AI regulation, the balance between innovation and safeguards, and the consideration of local contexts in developing AI governance frameworks.


difference_level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific approaches to AI regulation, there is a general consensus on the need for balanced governance that protects rights while fostering innovation. These differences highlight the complexity of developing effective AI governance frameworks that can address diverse global needs and contexts.


Partial Agreements

Partial Agreements

Both speakers agree on the importance of human rights and privacy in AI regulation, but they differ in their focus. Galvez discusses broader human rights challenges, while Tercova emphasizes specific healthcare-related privacy concerns.

speakers

Paola Galvez


Natalie Tercova


arguments

Human rights-based approaches are crucial but face implementation challenges


Patient privacy and data protection are non-negotiable in healthcare AI



Takeaways

Key Takeaways

AI regulation is necessary but should be flexible to avoid stifling innovation


A human rights-based approach to AI governance is crucial but faces implementation challenges


Context-appropriate AI governance is needed, especially for developing countries


Building AI literacy and capacity across stakeholders is essential


Risk-based approaches can help focus regulation on high-risk AI applications while allowing innovation in lower-risk areas


Public participation and multi-stakeholder collaboration are important in developing AI policies


Resolutions and Action Items

Develop AI literacy programs for regulators, developers, and users


Invest in digital skills development to leverage AI’s potential


Consider local needs and existing regulations when developing AI governance frameworks


Implement rigorous review processes for high-risk AI applications in healthcare


Unresolved Issues

How to effectively balance privacy and safety in AI-powered content moderation


The extent of responsibility for AI developers and companies for the effects of their technologies


Whether the current state of AI regulation constitutes over-regulation or under-regulation


How to address the growing AI divide between developed and developing countries


Suggested Compromises

Adopt principle-based frameworks to provide guidance without rigid rules


Implement lighter regulatory requirements for low-risk AI innovations


Use policies and guidelines instead of strict legislation where possible


Balance innovation and regulation by treating them as collaborative rather than competitive forces


Thought Provoking Comments

I don’t think the question is any more whether to regulate or not to regulate; we’re past that, in my opinion. What we’re working on now is how to regulate, right?

speaker

Paola Galvez


reason

This comment shifts the framing of the discussion from debating regulation itself to focusing on implementation approaches. It challenges the premise of the session title and sets a new direction.


impact

This reframing influenced subsequent speakers to focus more on specific regulatory approaches and implementation challenges rather than debating regulation in principle.


There are certain regulatory frameworks that need to happen for innovation to come to the forefront.

speaker

James Nathan Adjartey Amattey


reason

This insight highlights the complex relationship between regulation and innovation, suggesting they can be complementary rather than opposed.


impact

It prompted discussion of specific examples where regulation enabled or catalyzed innovation, adding nuance to the debate.


We have to take into consideration the level of risk involved. In light of this, I believe high-risk applications should undergo more rigorous review before they come into practice, while low-risk innovations can proceed under lighter regulatory requirements.

speaker

Natalie Tercova


reason

This comment introduces a nuanced, risk-based approach to regulation that balances innovation and safety concerns.


impact

It shifted the conversation towards more granular considerations of how to tailor regulatory approaches to different AI applications and risk levels.


There are different approaches, but just so we are aligned on one idea: I will mention some approaches, but they are not exclusive, right? They can be mixed when policymakers are deciding.

speaker

Paola Galvez


reason

This insight highlights the complexity of AI governance and the potential for hybrid regulatory approaches.


impact

It led to a more detailed discussion of various regulatory models (risk-based, human rights-based, principles-based, etc.) and their respective strengths and weaknesses.


If the internet had been regulated at the beginning, in its first three decades, before WSIS started, we couldn’t have the internet we are using today. We wouldn’t even be discussing internet governance.

speaker

Ananda Gautam


reason

This historical perspective provides a cautionary tale about over-regulation stifling innovation.


impact

It prompted reflection on balancing regulation with allowing space for technological development and innovation.


Overall Assessment

These key comments shaped the discussion by moving it from a binary debate about regulation vs. non-regulation to a more nuanced exploration of different regulatory approaches, their impacts on innovation, and the need to balance multiple concerns including safety, ethics, and technological progress. The discussion evolved to consider risk-based frameworks, the role of soft law and principles, and the importance of context-specific approaches tailored to different applications and regions. There was a general consensus that some form of governance is necessary, but disagreement on the exact form it should take and how to implement it effectively without stifling innovation.


Follow-up Questions

How can we find a balance between privacy and safety when using AI to detect child sexual abuse material (CSAM)?

speaker

Natalie Tercova


explanation

This is a crucial issue as it involves the tension between protecting children and maintaining individual privacy rights.


How can we ensure that risk assessments for AI systems are accurate and reliable?

speaker

Paola Galvez


explanation

This is important for implementing effective risk-based regulatory approaches to AI governance.


How can we improve AI literacy among policymakers, developers, and users?

speaker

James Nathan Adjartey Amattey


explanation

This is crucial for informed decision-making and effective regulation of AI technologies.


How can we design regulatory frameworks that are flexible enough to adapt to rapid technological advancements?

speaker

Nicolas Fiumarelli


explanation

This is important to ensure regulations remain relevant and effective as AI technology evolves.


How can we balance the need for innovation with the protection of human rights in AI development and deployment?

speaker

Paola Galvez and Ananda Gautam


explanation

This is critical for ensuring AI benefits society while minimizing potential harms.


How can developing nations build comprehensive AI policies that leverage the power of AI while addressing their specific challenges?

speaker

Ananda Gautam


explanation

This is important for ensuring equitable global development of AI technologies and policies.


How can we distinguish between AI-generated and human-generated content, and what are the implications for AI literacy?

speaker

Ananda Gautam


explanation

This is crucial for addressing potential misuse of AI and ensuring informed consumption of information.


How can we design AI regulations that take into account local needs and existing regulatory frameworks?

speaker

Paola Galvez


explanation

This is important for creating effective and context-appropriate AI governance.


How should responsibility be allocated among AI developers, companies, and users for the effects of AI systems?

speaker

Audience member (Agustina from Argentina)


explanation

This is crucial for establishing accountability in AI development and use.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.