Open Forum #35 Advancing Online Safety and Human Rights Standards

19 Dec 2024 08:15h - 09:15h

Session at a Glance

Summary

This discussion focused on applying human rights standards to online spaces and emerging technologies, particularly artificial intelligence (AI). Experts from various Council of Europe bodies discussed how existing conventions and recommendations address online safety, violence against women and children, and discrimination risks in AI.

The panelists emphasized that human rights apply equally online and offline, but acknowledged challenges in implementation. They highlighted the importance of both legal and non-legal measures, including education, awareness-raising, and multi-stakeholder cooperation. The Lanzarote Convention on protecting children from sexual exploitation and the Istanbul Convention on violence against women were cited as key frameworks that have been adapted to address online dimensions.

Regarding AI, the discussion explored both risks and opportunities. Concerns were raised about AI potentially amplifying existing biases and creating new forms of discrimination. However, panelists also noted AI’s potential to identify patterns of discrimination and improve safeguards. The need for transparent, auditable AI systems and updated non-discrimination laws was stressed.

The experts called for greater collaboration between governments, civil society, and tech companies to ensure online platforms uphold rights. They emphasized the importance of political prioritization and moving from rhetoric to action in addressing online harms. The discussion concluded that innovation and human rights protection are not mutually exclusive, but require clear standards, commitment, and cooperation across sectors.

Key points

Major discussion points:

– Applying human rights standards to the online/digital space

– Challenges and opportunities of AI for human rights, especially regarding discrimination and vulnerable groups

– Need for comprehensive legal and non-legal approaches to protect rights online

– Importance of multi-stakeholder collaboration between governments, civil society, and tech companies

– Balancing innovation with human rights protections in developing new technologies

Overall purpose:

The goal was to explore how established human rights standards can be understood and applied in the online space and with new digital technologies, with a focus on protecting vulnerable groups like women and children.

Tone:

The tone was primarily informative and analytical, with speakers providing overviews of relevant conventions, recommendations, and challenges. There was an underlying sense of urgency about the need to take action, but the tone remained measured and solution-oriented throughout. Towards the end, some speakers emphasized the need to move from rhetoric to concrete action in a slightly more forceful tone.

Speakers

– Menno Ettema: Moderator, Council of Europe, Hate Speech, Hate Crime and Artificial Intelligence

– Octavian Sofransky: Council of Europe, Digital Governance Advisor

– Camille Gangloff: Council of Europe, Gender Equality policies

– Naomi Trewinnard: Council of Europe, Sexual violence against children (Lanzarote Convention)

– Clare McGlynn: Professor at Durham Law School, Expert on violence against women & girls 

– Ivana Bartoletti: Member of the Committee of Experts on AI, Equality and Non-Discrimination of the Council of Europe, Vice President and Global Chief Privacy and AI Governance Officer at Wipro

– Charlotte Gilmartin: Council of Europe, Steering Committee on Anti-Discrimination, Diversity and Inclusion (CDADI)

Full session report

Expanded Summary of Discussion on Human Rights in the Digital Age

Introduction

This discussion, moderated by Menno Ettema of the Council of Europe’s Anti-Discrimination Department, explored the application of human rights standards to online spaces and emerging technologies, with a particular focus on artificial intelligence (AI). Experts from various Council of Europe bodies examined how existing conventions and recommendations address online safety, violence against women and children, and discrimination risks in AI.

Key Themes and Arguments

1. Applying Human Rights Standards Online

The panellists unanimously agreed that human rights apply equally online and offline. Menno Ettema framed the central question of the discussion: “How can well-established human rights standards be understood for the online space and in new digital technology?” This set the agenda for exploring specific ways in which existing frameworks are being adapted to digital contexts.

Octavian Sofransky presented the Council of Europe’s digital agenda, emphasizing the organization’s commitment to protecting human rights in the digital environment. A Mentimeter poll conducted during the discussion showed that participants felt some or all human rights are more difficult to apply online, underscoring the complexity of the issue.

Naomi Trewinnard emphasised the importance of the Lanzarote Convention in setting standards to protect children from sexual exploitation online. She also mentioned a background paper prepared for the Lanzarote Committee on emerging technologies. Similarly, Clare McGlynn discussed how the Istanbul Convention, adopted in 2011, addresses the digital dimension of violence against women, with a General Recommendation on this topic adopted in 2021. These examples illustrated how existing legal frameworks are being adapted to address online harms.

2. Artificial Intelligence and Human Rights

The discussion explored both the risks and opportunities presented by AI in relation to human rights. Ivana Bartoletti provided a critical perspective, stating, “AI does threaten human rights, especially for the most vulnerable in our society. And it does for a variety of reasons. It does because it perpetuates and can amplify the existing stereotypes that we’ve got in society.” She also raised concerns about new forms of algorithmic discrimination created by AI that may not be covered by existing laws.

Naomi Trewinnard noted that AI is being used to facilitate sexual abuse of children online, highlighting the urgent need for updated protections. However, Bartoletti also emphasised AI’s potential for positive impact, stating, “We can leverage AI and algorithmic decision-making for the good if we have the political and social will to do so.” This balanced view led to a discussion of specific ways AI could be used to promote equality and human rights, given proper guidance and political commitment.

Octavian Sofransky highlighted the Council of Europe’s work on the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, demonstrating the organization’s proactive approach to addressing AI-related challenges.

3. Collaboration to Protect Rights Online

A recurring theme throughout the discussion was the need for multi-stakeholder collaboration to effectively address online human rights issues. Naomi Trewinnard highlighted the importance of cooperation with the private sector to obtain electronic evidence in cases of online child exploitation. She also emphasised the critical need for global collaboration and mentioned the annual Awareness Raising Day about sexual abuse of children on November 18th.

Ivana Bartoletti suggested the use of regulatory sandboxes to allow governments, companies, and civil society to work together on AI governance. She also discussed the EU’s Digital Services Act (DSA) as an example of regulatory efforts in this space. Clare McGlynn called for greater political prioritisation and action from tech platforms to address online harms.

4. Balancing Innovation and Human Rights Protection

The experts grappled with the challenge of balancing technological innovation with human rights protections. While recognising the potential benefits of AI and other emerging technologies, they stressed the need for transparent, auditable systems and updated non-discrimination laws.

Clare McGlynn emphasised the societal and cultural dimensions of online violence, stating, “If we’re ever going to prevent and reduce violence against women and girls, including online and technology facilitated violence against women and girls, we need to change attitudes across all of society and including amongst men and boys.” This comment broadened the scope of the discussion to include education and awareness-raising as key strategies alongside legal and technological approaches.

5. Defamation Laws and Human Rights Defenders

In response to a question from audience member Jaica Charles, Menno Ettema addressed the issue of defamation laws being misused against human rights defenders online. He highlighted the Council of Europe’s work on Strategic Lawsuits Against Public Participation (SLAPPs) and emphasized the need to protect freedom of expression while combating online hate speech, referencing Committee of Ministers Recommendation CM/Rec(2022)16 on combating hate speech.

Conclusions and Unresolved Issues

The discussion concluded that innovation and human rights protection are not mutually exclusive but require clear standards, commitment, and cooperation across sectors. The experts called for a move from rhetoric to concrete action in addressing online harms and protecting human rights in digital spaces.

Several unresolved issues emerged, including:

1. How to effectively balance innovation with human rights protection in AI development

2. Addressing new forms of algorithmic discrimination not covered by existing laws

3. Ensuring transparency and auditability of AI systems used by private companies

4. Protecting human rights defenders from misuse of defamation laws to silence them online

The discussion highlighted the need for ongoing dialogue, research, and collaboration to address these complex challenges at the intersection of human rights, technology, and governance. The use of Mentimeter questions throughout the session encouraged active participation and provided valuable insights into audience perspectives on these critical issues.

Session Transcript

Menno Ettema: Hi Ivana, she’s just joining now. Perfect, good. Then I will slowly kick off. You should be hearing on channel one. Can you hear me now, Ivana? No? Yes, great. Okay, good. Good morning everyone. Good afternoon for those in other parts of the world, or good evening. Good night. We are here at an open forum for one hour, a short timeline to discuss quite a challenging topic, which is to advance online safety and human rights standards in that space. I will shortly introduce myself first. I’m Menno Ettema. I work for the Council of Europe in the Anti-Discrimination Department, working on hate speech, hate crime and artificial intelligence. And I’m joined by quite an extended list of speakers and guests. I’m joined by Clare McGlynn. She is a professor at Durham Law School, expert on violence against women and girls online. We are also joined here in the room by Ivana Bartoletti, member of the Committee of Experts on AI, Equality and Non-Discrimination of the Council of Europe, and also Vice President and Global Chief Privacy and AI Governance Officer at Wipro. Also with us is Naomi Trewinnard, Council of Europe, working on sexual violence against children, the Lanzarote Convention. As online moderator, we have with us Charlotte Gilmartin, who works also in the Anti-Discrimination Department and is secretary to the expert committee on AI, non-discrimination, and equality. And Octavian Sofransky, digital governance advisor, also at the Council of Europe. The session is about human rights standards and whether they also apply online, question mark. And I think it’s important to acknowledge that the UN and regional institutions like the Council of Europe, but also the African Union and others, have developed robust human rights standards for all their member states. And that also includes other key stakeholders, including business and civil society. 
The UN and the Council of Europe have clearly stated that human rights apply equally online as they do offline. But how can well-established human rights standards be understood for the online space and in new digital technology? So that’s the question of today. I would like to give the floor first to Octavian, who will provide us a little bit of information about the Council of Europe’s digital agenda, just to set the frame for our institution, and then we will broaden the discussion from there, or actually narrow it into really working on the anti-discrimination field. Octavian, the floor is yours.

Octavian Sofransky: Ladies and gentlemen, dear colleagues, I’m greeting you from Strasbourg. The Council of Europe, the organizer of this session, remains unwavering in its commitment to protecting human rights, democracy, and the rule of law in the digital environment. This dedication was reaffirmed by the Council of Europe’s Secretary General during the European Dialogue on Internet Governance in Vilnius last June. The Secretary General emphasized that the digital dimension of freedom is a priority for the Council of Europe. Our organization has always recognized the importance of balancing innovation and regulation in the realm of new technologies. In reality, these elements should not be viewed as opposing forces but as complementary partners, ensuring that technological advancements genuinely benefit our societies. A Council of Europe Committee of Ministers declaration on the WSIS+20 review was issued this September, advocating for a people-centered approach to internet development and the multi-stakeholder model of internet governance, and supporting the extension of the IGF mandate for the next decade. Moreover, we are proud to announce the adoption of the pioneering Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law last May. This landmark convention, which was opened for signature at the Conference of Ministers of Justice in Vilnius on 5 September very recently, is the first legally binding international instrument in this field and has already been signed by 11 states around the world. Sectoral instruments will complement this convention, including possibly on online safety, today’s session topic. 
As a long-time supporter of the IGF process, the Council of Europe has prepared several sessions for this year’s edition of the IGF, including on privacy, artificial intelligence and indeed the current session on online safety, a topic that remains a top priority for all European states and their citizens. Thank you. Over to you, Menno.

Menno Ettema: Thank you, Octavian, for elaborating on the Council of Europe’s work and the reason for this session and a few others. Can I ask all the speakers that are joining us online to switch on their cameras, because it makes it a little bit more lively for us here in the room, but also for those online that are joining. Thank you very much. I would like to go over to Naomi, because the Lanzarote Convention on sexual violence against children has long-term experience with the topic; it’s a very strong standard, and it recently published a new document on the digital dimension of sexual violence against children. Naomi, I give the floor to you to introduce the convention and the work that it does.

Naomi Trewinnard: Thank you, Menno. Good morning, good afternoon, everybody. I’m very pleased to be joining with you today. I’m a legal advisor at the Lanzarote Committee Secretariat, and that’s the committee of the parties to the Convention for the Protection of Children Against Sexual Exploitation and Sexual Abuse. So, as Menno mentioned, this is a really comprehensive international treaty that is open to states worldwide, and it aims to prevent and protect children against sexual abuse and to prosecute those who offend. So I wanted to just briefly present some of the standards that are set out in this convention. So firstly, to do with prevention, it requires states to screen and train professionals, ensure that children receive education about the risks of sexual abuse and how they can access support if they’re a victim, as well as general awareness raising for all members of the community and also preventive intervention programmes. When it comes to protection, really, we’re trying to encourage professionals and the general public to report cases of suspected sexual abuse and also to provide assistance and support to victims, including through setting up helplines for children. When it comes to prosecution, it’s really essential to ensure that perpetrators are brought to justice. And this comes through criminalising all forms of sexual exploitation and sexual abuse, including those that are committed online, for example, solicitation or grooming of a child, offences related to child sexual abuse materials (also called child pornography), and also witnessing or participating in sexual acts over a webcam. The Convention also sets out standards to ensure that investigations and criminal proceedings are child-friendly, so there the aim is really to avoid re-victimising or re-traumatising the child victim, and also to obtain best evidence and uphold the rights of the defence. 
So in this respect the Lanzarote Committee has recognised the Children’s House or Barnahus model as a promising practice to ensure that we obtain good evidence, perpetrators are brought to justice and we avoid victimising children. So these standards and safeguards apply equally to abuse that is committed online and also contact abuse that is committed offline. The Treaty really emphasises the importance of multi-stakeholder coordination in the context of combating online violence, and this Convention really specifically makes a reference to the information and communication technology sector, and also tourism and travel and banking and finance sectors, really trying to encourage states to coordinate with all of these private actors in order to better protect children. The Lanzarote Committee has adopted a number of different opinions, declarations and recommendations to clarify the ways in which this Convention can contribute to better protect children in the online environment. For example, by confirming that states should criminalise the solicitation of children for sexual offences even without an in-person meeting, so when this is in order to commit sexual offences online. And also, given the dematerialised nature of these offences, multiple jurisdictions will often be involved in a specific case. We might have the victim situated in one country, electronic evidence being stored on a server in a different country, and the perpetrator sitting in another country, committing this abuse over the internet. Therefore, the committee really recognises and emphasises the importance of international cooperation, including through international bodies and international meetings such as this one. The convention is also really clear that children shouldn’t be prosecuted for generating images or videos themselves. 
We know that many children are tricked or coerced or blackmailed into this or, you know, generate an image thinking it’s going to be used for a specific purpose within a consensual relationship, and then it gets out of hand. So the committee’s really emphasised that we should be protecting our children, not criminalising or prosecuting them. In terms of education and awareness raising, the committee really emphasises that we need to ensure that children of all ages receive information about children’s rights, and also that states are establishing helplines and hotlines, like reporting portals, so that children have a safe place to go to get help if they’re becoming a victim. And in that context, it’s also really essential to train persons working with children about these issues so that they can recognise signs of abuse and know how to help children if they’re a victim. So I’ve put some links to our materials on the slides and I’ll hand back to Menno now. Thank you for your attention.

Menno Ettema: Thank you very much. Thank you very much, Naomi. It is quite elaborate work to be done. But what I think the convention really outlines is that it takes legal and non-legal measures, and it’s the comprehensive approach and the multi-stakeholder approach that are really important in addressing sexual exploitation of children or violence against children. In that line of thought, I want to also give the floor to Clare, who can speak on the work around the Istanbul Convention, particularly because GREVIO published a relatively new General Recommendation No. 1 on the digital dimension of violence against women, which I think is a very important document to share here today.

Clare McGlynn: Yes, good morning, everybody. And thank you very much. So I’m Clare McGlynn. I’m a professor of law at Durham University in the UK. And I’m also a member of the Council of Europe’s expert committee on technology-facilitated violence against women and girls. So I’m going to briefly talk today about the Istanbul Convention that’s just been referred to, which was adopted in 2011. There are four key pillars that make this a comprehensive piece of law: it talks about prevention, protection, prosecution, and integrated policies. Now, the key theme of the Istanbul Convention is that violence against women and girls must be understood as gendered. Violence against women and girls is perpetrated mainly by men. It’s also experienced because women and girls are women and girls. Now the monitoring of that convention is done by the body called GREVIO. That’s the independent expert body which undertakes evaluations of state compliance, as well as preparing various thematic reports. And as already mentioned, in 2021, GREVIO adopted its General Recommendation on the digital dimension of violence against women and girls. So this general recommendation offers an interpretation of the Istanbul Convention in light of the prevalence and growing concern and harms around online and technology-facilitated violence against women and girls. It provides many detailed explanations as to how the convention can be interpreted and adopted in light of the prevalence of online abuse, including things like reviewing relevant legislation in areas where the digital dimension of violence against women and girls is particularly acute. We see this particularly in the area of domestic violence, where some legislation does not account for the fact that in reality today, most forms of domestic abuse involve some element of technology and online elements. It also talks about incentivizing internet intermediaries to ensure content moderation. 
The point here is about how women’s human rights are being inhibited and affected by online abuse. And regulation, such as content moderation, is necessary to protect those rights. In other words, regulation frees women’s speech online by ensuring we are more free and able to talk and speak online rather than self-censoring in the light of online abuse. It also talks, for example, about the importance of undertaking initiatives to eradicate gender stereotypes and discrimination, especially amongst men and boys. If we’re ever going to prevent and reduce violence against women and girls, including online and technology facilitated violence against women and girls, we need to change attitudes across all of society and including amongst men and boys. Thank you very much.

Menno Ettema: Thank you very much, Clare. I really like the general recommendation because of how it portrays the offline forms of violence against women and harassment, in all the different ways and shapes they take, and how that is actually also mirrored in the online space. So it’s really a very clear explanation of how the online and the offline are the same, even though we might call them different things, or they might be slightly differently presented because of the online context. But the dynamics are very similar. Thank you. Content moderation is an important part here as well. And working again with stereotypes and attitudes is a challenge. So it’s, again, legal, but also the non-legal approaches that are very important. Thank you very much. Ivana, can I give the floor to you? Because one new area is, of course, AI. Octavian already mentioned the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, just adopted. Can you give a few short words on how these human rights standards apply in the AI field? Then we’ll give the floor to the rest of the audience, and we’ll come back a little bit more on the discrimination risks when it comes to AI, including gender equality.

Ivana Bartoletti: Thank you so much. So AI, of course, is one of the most talked-about things at the moment; here at the IGF we’ve been talking about AI a lot. And what is the impact of AI on existing human rights and civil liberties? So obviously, artificial intelligence has been capable of doing so many excellent and good things over recent years. Can you hear me? There’s been a big push over recent years, especially with generative AI. And talking about this... oh, it’s breaking up. Can you hear me? Is that OK? OK. So I’m talking about generative AI, which, as we’ve seen, can generate images, and that is another area of discussion. Now, AI does threaten human rights, especially for the most vulnerable in our society. And it does for a variety of reasons. It does because it perpetuates and can amplify the existing stereotypes that we’ve got in society, crystallizing them into its representations. So what you were mentioning earlier, you were saying, how do we change these stereotypes beyond the legal side? Well, there is an issue here, because the use of big data and machine learning can amplify the existing stereotypes and crystallize them. And on the other hand, it lowers the bar of access to tools such as, for example, generative AI tools that can generate deepfake images. And whether this is in the space of fake information, or in the space of depicting real women in pornography, what we are seeing is, again, that lowering the bar of access to these tools can have a detrimental impact on women, especially. But if you think about privacy, for example, and what Clare was saying, that a lot of domestic abuse is enabled by technology: AI plays a big part in it, because tools like that can turn into monitoring tools. 
And these monitoring tools can turn into real tools of suppression. So we are very firm on this, and the convention is wonderful in the sense that it’s the first really international convention; yes, you have the EU AI Act, but that is limited to Europe, while the convention is international. This comes alongside many other things that have happened: for example, the Global Digital Compact at the UN, which thinks of framing human rights in the digital space, and various declarations. So there is definitely a discussion happening globally on how we protect, safeguard and enhance human rights in the age of AI, but it’s not an easy task, and it is one that needs to see all actors involved.

Menno Ettema: Thank you very much. This is only just a small start on the discussion on AI, so we’ll come back to that in a second round. But what we’re trying to do here is to present a few of the various conventions that exist related to discrimination and protection of groups that are particularly targeted: the Istanbul Convention, the Lanzarote Convention. But I also wanted to engage with the audience here in the room and online. We launched a little Mentimeter, and I’ll ask Octavian to put the Mentimeter online. Because for us, it’s very evident that human rights apply equally online as offline. But maybe we’re wrong. I was wondering what others think about this. So I have a little Mentimeter quiz to just put the finger on the pulse. Octavian, are you there? Can you put the Mentimeter on, please?

Octavian Sofransky: The Mentimeter is on.

Menno Ettema: We can’t see it. You have to change screens. Okay. I can assure you we tested this yesterday and it worked perfectly. But when the pressure is on, there’s always a challenge. Okay, well, Octavian is dealing with the technical challenge. Maybe I can give the floor first to Charlotte; maybe there are already some questions from the audience online. Ah, here’s the Mentimeter. Sorry, Charlotte. So you can scan the QR code or go to menti.com and then use the code that is mentioned there, 29900183. So if you scan it or type it in... yes, I see people going online. Great. Then we can go to the next slide, Octavian, for the first quiz. So are there specific human rights that are more difficult to apply online? There are four options, so please answer. Meanwhile, Charlotte, maybe I can give you the floor while people cast their votes. From online, were there any questions or comments that we should take in here in the room?

Charlotte Gilmartin: For now, there’s one comment from Peter King Quay from Liberia IGF. Their question is, what are some of the key interventions that can be suggested to increase or improve this topical problem in West Africa, especially in West Africa and the MRU region of Liberia, Sierra Leone, Guinea, and Ivory Coast, vis-a-vis these conventions and norms against women and girls, especially the Istanbul Convention? Thank you very much.

Menno Ettema: Can I give the floor to Clare on this question?

Clare McGlynn: Yes, what I would add is that, as the colleague is possibly aware, the African Commission a couple of years ago did adopt a specific resolution on the protection of women against digital violence in Africa. And the work of the Special Rapporteur on the Rights of Women in Africa has done a lot of work around this regarding the online dimension. So both that specific resolution and the work of the Special Rapporteur are likely to perhaps provide some further help and guidance on the particular issues and problems and challenges and opportunities arising in Africa.

Menno Ettema: Thank you very much, Clare. And I would also say that GREVIO’s General Recommendation No. 1 under the Istanbul Convention gives very practical suggestions on what can be addressed, and I think these can be adapted to local contexts, of course, always; that’s everywhere, including the European continent. But I think there are many guidelines or suggestions there that would be equally applicable in other parts of the globe. I see that overall there’s a tendency to say that all or some human rights are more difficult to apply online. It’s an interesting result: so yes, all human rights apply online, but it’s sometimes more difficult. Maybe some people want to respond to that. I would like to go to the second question of the Mentimeter: what should be done more to ensure human rights online? So if some human rights are more difficult to apply online, what could be done? What do you think could be done? And meanwhile I want to check the audience here if there are any questions or statements that they would like to share. Yes, in the back of the room. Could you please state who you are, just for the audience? Should be. It should work. Can you hear? Yes. Very well.

Audience: My name is Jaica Charles. I work with the Kenyan Section of the International Commission of Jurists in Nairobi. First of all, thank you for these wonderful presentations from various stakeholders; we appreciate you a lot. This is something we are very much interested in as digital rights experts and human rights defenders, especially regarding digital rights and AI. My question is this: in the African context, there has been a lot of pushback by the authorities against human rights defenders in the name of defamation. I hope we all understand defamation. Defamation laws have been used against human rights defenders online whenever they try to pinpoint issues regarding human rights. They are often charged under defamation, and you will also see abductions and similar things; this has happened recently in the African context. For example, in Kenya there was the Gen Z movement, which was well known all over. So how can we approach that, or how can we stop it, especially in the context of AI? How can we protect human rights defenders online from being charged under defamation, or rather from defamation being used as a tool to prevent them from doing their human rights work? Thank you.

Menno Ettema: Thank you very much. Just looking at my speakers to see who would like to pick up this question. Maybe I will give it a go first myself, and then the other colleagues can contribute. It’s a very pressing question. Within the European scope, if I may translate it to the area where I’m more knowledgeable, the Council of Europe and national authorities are moving away from defamation laws and legislation. I think this is echoed in the UN as well: defamation laws are not particularly helpful, because of the way they are often formulated and applied. There are questions now about hate speech legislation, for example. The Council of Europe adopted a recommendation in 2022, Recommendation CM/Rec(2022)16 on combating hate speech, if you want to check it out. It specifically explains and argues why defamation laws are not up to the task of actually dealing with hate speech. And hate speech is a real problem for societies: it undermines the rights of the persons or groups targeted and undermines cohesion in communities. Well-crafted hate speech laws may function quite well, but well-crafted also means that we need to acknowledge the severity of hate speech. You have hate speech that is clearly criminal and falls under criminal responsibility; this should be a very restrictive understanding, so it should be very clearly defined what we understand by it and which grounds are protected under hate speech in criminal law. Then you have other forms of hate speech that could be addressed through administrative and civil law, for example self-regulatory mechanisms with the media or political parties that have administrative rules in place. That is a less severe interference with freedom of expression, Article 10 of the European Convention, for example. And it’s this balancing act.
And then we have other forms of hate speech that cannot be restricted through legislation but are still harmful, so we need to address them. So I would really argue for taking inspiration from the recommendation, for example, to engage in a national dialogue on reforming the legislative situation and to abide by a very narrow understanding of the hate speech that falls under criminal law. In the recommendation, we also refer to international UN standards and conventions that specify what falls under that. And then there are other forms of response, including non-legal measures: education, awareness raising, counter-speech, etc. This would be a much better response. Defamation laws should not be used in such a way; they can be very easily misused, whereas well-construed hate speech laws should help. There is also the work on SLAPPs, strategic lawsuits against public participation, which might give some guidance on what could be done to address the misuse of legislation for silencing a group. There is a recommendation on SLAPPs, and that’s quite an interesting document that could guide you in your work in that sense. Thank you. Naomi, please.

Naomi Trewinnard: Thank you. I just wanted to share some insights from something parallel that we’ve dealt with at the level of protecting children from sexual abuse. The convention is quite clear that professionals, and all those who have a reasonable suspicion of sexual abuse, should report it in good faith to the appropriate authorities, such as child protection authorities or the police, and also that people who report in good faith should be protected from criminal or civil liability, so protected against claims of defamation as well. The Lanzarote Committee is actually looking at this question at the moment: how to reinforce protections for professionals so that they can respect their duties of confidentiality and their obligations to keep information safe, but also their duties to protect children. It’s a really fine balancing act, but clear guidance from states and policymakers, setting out the ways in which people who denounce or report something should be protected from consequences, can be very helpful as well.

Menno Ettema: Thank you, Naomi, for that addition. Just going to the Mentimeter, I see several suggestions, thank you for that: education, more education, content moderation, more research, data privacy laws, working on safety at all levels, physical and online, and strengthening frameworks and their interpretation. So it’s quite an array, but education is mentioned by quite a few people. Thank you. I would like to open the second section of the discussion and go back to AI, because it’s the big elephant in the room. The question is whether human rights standards, for example in the areas of gender equality, non-discrimination and the rights of the child, are delicate porcelain that will soon encounter an elephant stampede, or whether there are actually opportunities in the use of AI, so that we should not be so worried about the human rights of these groups when it comes to its deployment. Ivana, you already mentioned some aspects: AI and human rights are slowly coming together, we need to be cautious, there are risks. But maybe there is some more to add, specifically in the area of non-discrimination and equality, also from your work in the expert committee.

Ivana Bartoletti: Octavian, next slide, yes, thank you. AI enables a lot of this, including the question that we just had about human rights defenders, and the same with journalists. There is also a gender dimension to it, because it is often women who are targeted the most, and the elephant in the room is AI because AI has made a lot of this very much available. So, in the area of artificial intelligence and algorithmic decision-making, we first have to distinguish, and it’s very important, between so-called discriminative AI, the machine learning we use more traditionally, and generative AI. What is happening in that first space, especially around algorithmic decision-making, is that we are seeing women, and especially those at the intersection of gender, race and other dimensions, being locked out of services and discriminated against. It has happened a lot, for example, with facial recognition, with banking services, and in education. This is because AI needs data: data is central, data is scarce, and often the data comes from the Western world, so when that data is applied elsewhere it doesn’t represent everyone. This bias exists because it exists in society, so, to an extent, there is little technological solution to a problem that is a societal problem. With generative AI we have seen another set of issues, because these products are also the product of scraping the web, which means taking language as it is, bringing a whole new set of issues: is the language that we are all absorbing and learning from these tools inclusive or not?
So I think there is an understanding, which has become more mainstream, that discriminative AI and generative AI, in combination, can perpetuate and hard-code existing inequalities into systems that make decisions and predictions about tomorrow. However, there is also a positive use of these tools: we can leverage AI to try to address some of these problems. For example, leveraging big data to understand the root causes of inequalities; understanding the links between different sectors and areas of discrimination by looking at big data that we wouldn’t be able to examine with human eyes alone; or using artificial intelligence and algorithmic systems to set a higher bar, for example on how many women we want working in a business, by adjusting the data, using synthetic data, or creating datasheets that enable us to improve the outputs. What I’m trying to say is that we can leverage AI and algorithmic decision-making for good if we have the political and social will to do so. If we leave it to the data alone, it’s not going to happen, because data is simply representative of the world. I think that understanding is reflected in the study on the challenges and opportunities that we’ve done, and I encourage everyone to read it. It’s important because we explain where bias comes from, and why this bias is detrimental to women’s human rights and dangerous in terms of discrimination. We provide a set of recommendations for states: how can we challenge this? How can we look at existing non-discrimination laws and see if they’re fit for the age of AI? For example, if a woman is discriminated against and is not getting access to a service because she is a woman and also a Black woman, how are we going to ensure that this intersectional discrimination is addressed by existing non-discrimination law?
And furthermore, who is going to bear the burden of proof? Because of the big unspoken problem that we have, the asymmetry between us as individuals and the data extractivism and complexity of what some call surveillance capitalism, it can’t be left to the most vulnerable to say, “I am going to challenge this.” This means there has to be strong regulation in place to make sure that the onus is on the company to provide transparency, challengeability, clarity and auditability of the systems they are using, so that the burden is not left on the individual alone, and these systems can be open to question by civil society, governments and institutions. Business can play a big part in this. What I’m trying to say is that AI, and especially responsibly automated bots, can be great in supporting the public and private sectors to develop and create AI that is inclusive. We can use AI and big data strategies to really understand where bias may come from; we can use big data analytics to identify patterns of discrimination. There is a lot that can be done in this space, but there has to be the willingness to do so. So I’m really hoping that in a space like this, a document like that one can be leveraged beyond the Council of Europe, because it’s really important that we understand that existing legislation, discrimination law and privacy laws, may need to be revisited in order to cater for the harms that come from algorithmic decision-making and generative AI.

Menno Ettema: Thank you very much, Ivana. That’s quite an elaborate and detailed analysis of the challenges that lie ahead, but also of the opportunities; there are opportunities and possibilities. Can I give the floor to Naomi, maybe from the perspective of the risks to children’s safety in the use of AI?

Naomi Trewinnard: Sure, and thank you for the floor. In terms of AI, the Lanzarote Committee has been paying particular attention to emerging technologies, especially over the last year or so. The committee has recognised that artificial intelligence is being used to facilitate sexual abuse of children. Ivana mentioned generative AI models: we know that generative AI is being used to make images of sexual abuse of children, and that large language models are being used to facilitate grooming of children online and the identification of potential victims by perpetrators. Generative AI is also being used to alter existing materials depicting victims. I know of cases where a child has been identified and rescued, but the images of the abuse are still circulating online, and AI is now being used to alter those images to create new images of that child being abused in different ways. We also know that this technology is being used to generate completely fake images of a child, and that in some cases fake images of a child naked or being sexually abused are used to coerce and blackmail the child into making images and videos of themselves. Sometimes it is used to blackmail children into giving up the contact details of their friends, so that the perpetrator can reach a wider group of victims. In other cases, we know of fake images being used to blackmail children for financial gain. All of these different forms of blackmail and abuse have been recognised as forms of sexual extortion against children by the Lanzarote Committee. And as Menno mentioned at the beginning of our session, the Lanzarote Committee held a thematic discussion on this issue in November.
So just a few weeks ago in Vienna, the committee adopted a declaration which sets out steps that states in particular can take to better protect children against these risks of emerging technologies, such as criminalising all forms of sexual exploitation and sexual abuse facilitated by emerging technologies. That means looking at legislation and making sure regulation is in place, including for AI-generated sexual abuse material, and also ensuring that sanctions are effective and proportionate to the harm caused to victims. Historically, we’ve seen much lighter sanctions in criminal codes for, say, a child sexual abuse material offence where there is no contact with the victim; so it is worth really looking at those codes to see whether that is still effective and proportionate, given the harm we know is being caused to children today by these technologies. On the screen there, you have a link to a background paper that was prepared for the committee, which explores the risks and opportunities of these emerging technologies in detail. And just to close on this point, I wanted to mention that criminalising these behaviours is not enough. The committee has also called on states to make use of these technologies themselves: as Ivana mentioned, there is a great opportunity here to leverage them to better identify and safeguard victims and to detect and investigate perpetrators. This really requires cooperation with the private sector, especially as regards preserving and producing electronic evidence that can then be used in court across jurisdictions, and the Cybercrime Convention and its Second Additional Protocol provide really useful tools that states can use to better obtain evidence. So I just wanted to say that we’re really grateful to have this opportunity, and we’re really interested in exchanging further with those in the room about how to cooperate to better protect children.
And perhaps lastly, to mention that the 18th of November is the annual awareness-raising day about sexual abuse of children. It’s an invitation to all of you to add that date to your calendars and to do something on the 18th of November each year to raise awareness of sexual abuse, so that we can better promote and protect children’s rights. Thank you.

Menno Ettema: Thank you, Naomi, also for mentioning the international day, because awareness raising and education are a key part of resilience, including making parents and others aware so they can support children who are possible targets. I’ll soon give the floor again to the audience, but I first want to give the floor to Claire on violence against women and AI. Ivana already addressed some of these points, but I’m sure Claire has some contributions, also from GREVIO’s perspective.

Clare McGlynn: Yes. I don’t know if the slide I prepared is going to come up, but I actually just need to be very brief, because what I wanted to say follows on from Ivana and in fact refers to, and provides the link to, the report that she and Raphaële Xenidis wrote about the opportunities of AI as well as the challenges, particularly drawing out what states could be doing, such as reinforcing rights and obligations around taking positive action to use AI to eliminate inequalities and discrimination. The one point I will add is that Ivana’s report refers to the possibility that, in the future, there will be other vulnerable groups that are not necessarily covered by existing anti-discrimination laws. So we have to be very open to how experiences of inequality and discrimination might shift in the world of AI, be alive to that, and be ready to take steps to help protect those individuals. Thank you.

Menno Ettema: Thank you. Octavian, the next slide didn’t come up; maybe you could work on that, because it’s important to encourage people to take a quick picture. The report that Claire refers to, which Ivana also worked on, is particularly useful for understanding the risks of AI when it comes to discrimination, and particularly gender equality and violence against women, and the steps that can be taken. I think the point here is that new groups, or what we sometimes call grounds or characteristics, are emerging because AI and the intersection of data points create new, how do you call it, references.

Ivana Bartoletti: Algorithmic vulnerability, yeah. The point here is this: when you think about non-discrimination laws, you think about specific grounds. You say you cannot be discriminated against because of a particular ground, religion or whatever. The problem with AI is that algorithmic discrimination is created by the AI itself, because it can discriminate against somebody for visiting a particular website, or because of the intersection between visiting a website and doing something else; this is big data. This algorithmic discrimination may not overlap with the traditional grounds of discrimination on which people have been protected. That lack of overlap is what Claire is referring to, and it is something we need to think about, because we may need to look beyond the way we have approached discrimination law until now.

Menno Ettema: Yeah, thank you very much. I want to go back for a last round to the audience and also launch another little quiz with the Mentimeter. So I’ll ask Octavian to change the screen to the Mentimeter. Octavian, can you manage? Well, Octavian is trying that out. Maybe Charlotte, can I give you the floor first if there are any further comments or questions that came from the online audience?

Charlotte Gilmartin: Not just at the moment, no, no further questions. But I have put the links to the documents that all the speakers have discussed in the chat. So if any participants want to want to find the links, they should all be there.

Menno Ettema: That’s great. I take this opportunity to mention to everybody in the room that the recordings will be online later on the IGF’s YouTube channel, where you can also find all the links, because the chat will be visible in the recordings. Octavian, are you with us? Can you manage the Mentimeter? Octavian? Yes, there you go. It’s the same quiz, but in case you lost connection, you can scan the QR code and use the numbers. I see people registering again. First question: as I put it at the beginning, is AI an elephant stampede trampling over gender equality, non-discrimination and the rights of the child? Yes, there’s no holding back the AI, of course. No, elephants are skilful animals and human rights are not fragile. Or maybe, but let’s not blame the elephants. Meanwhile, are there any questions from the audience in the room? Just checking quickly. There you go. Yes. Can you hear me? Yes.

Audience: Ivana and Naomi both mentioned collaboration. How can governments, civil society and tech companies collaborate more effectively to ensure that online platforms are protecting and upholding rights? Can I ask who you are? Sorry, yes. I’m Mia McAllister. I’m from the US.

Menno Ettema: Great. Thank you. The question was to Claire and Ivana. Claire, would you like to start?

Clare McGlynn: No, I’m happy for Ivana to take this one; she probably has more expertise in this particular aspect.

Ivana Bartoletti: Thank you for the question. There are several aspects here. First of all, there is responsibility on platforms and the private sector, which is very important. For example, in the European Union, the DSA goes in that direction: content moderation, requiring transparency, requiring openness, requiring auditability. One of the provisions of the DSA, and I’m brushing over things here, is that data can be accessed by researchers, so that they can understand what some of the sources of online hate might be. So there is an onus that must be placed on companies, and that is important. There is also AI literacy that needs to happen in education settings. I always say we need people to develop a distrust by design as they grow up with these technologies: we need to tell people that they have to challenge all of this. It’s important to look at new regulation, but it’s also very important, in my view, that we create safe environments for companies and governments to experiment together. The sandboxes, for example, are very good; there are different kinds, regulatory and technical. It’s really important, because some things in this field are very hard to tackle, especially with generative AI: some of them can be at odds with the very nature of generative AI. So having these sandboxes where government and civil society can work together to look into and influence these products is really, really important. I would push towards that kind of collaboration.

Menno Ettema: Thank you very much. Octavian, could you launch the last question, just to gather some further thoughts on what more can be done to ensure human rights in the use of AI? I just wanted to ask if there are any other questions from the audience or online? No? Then, while people answer this question, maybe a last word, a recommendation for us to take forward. We have about a minute left, so maybe Naomi first, a final word of wisdom.

Naomi Trewinnard: Well, thank you. I think, just to reiterate, the key is really collaboration and dialogue, and the IGF is an excellent opportunity to have this dialogue. For those interested in collaborating with the Lanzarote Committee, please do get in touch; our details are on the slide. We also regularly hold stakeholder consultations at the Council of Europe in the context of developing our standards and recommendations, so tech companies, please do engage with us, and let’s have a constructive dialogue together to better protect human rights online.

Menno Ettema: Thank you, Naomi. Claire, a last word of wisdom?

Clare McGlynn: Yes. I think what we need to see is greater political prioritisation and a move, basically, from rhetoric to action. For me, that means demanding that the largest tech platforms actually act to proactively reduce the harms online. There is a lot of very positive rhetoric, but we have yet to see an awful lot of action and actual change.

Menno Ettema: Thank you. Ivana?

Ivana Bartoletti: Yeah, to me it’s very much about breaking the innovation versus human rights, versus privacy, versus security, versus safety argument we sometimes hear. On the one hand there is the argument that we’ve got to innovate, and we have to do it fast and quickly, and to do so we may have to sacrifice something. Well, that argument doesn’t stand, and this is where Claire is right: this is where we need more action. We need to do it all,

Menno Ettema: and it’s possible to do it all, through cooperation, clear standards and clear commitment, with legal and non-legal measures. I think those are the key takeaways and key words that I want to take forward. I thank my panellists, and also my colleagues Charlotte and Octavian for their support. Thank you everyone for attending this session. If there are any other questions, please be in touch with us through the forums on the Council of Europe website, or directly; you have our details on the IGF website. Okay, thank you very much, and thank you to the technical team for all the support.


Menno Ettema

Speech speed: 147 words per minute
Speech length: 2883 words
Speech time: 1174 seconds

Argument: Human rights apply equally online and offline
Explanation: Menno Ettema asserts that human rights standards should be applied in the same manner in both online and offline contexts. This implies that the protections and freedoms guaranteed by human rights laws should extend to digital spaces.
Major discussion point: Human Rights Standards Online
Agreed with: Octavian Sofransky, on the point that human rights apply equally online and offline

Argument: Multi-stakeholder collaboration and dialogue is key
Explanation: Menno Ettema emphasizes the importance of collaboration and dialogue among various stakeholders to effectively address online human rights issues. This approach recognizes that protecting rights in the digital space requires input and action from multiple sectors.
Major discussion point: Collaboration to Protect Rights Online
Agreed with: Naomi Trewinnard and Ivana Bartoletti, on the need for collaboration to protect rights online

Octavian Sofransky

Speech speed: 139 words per minute
Speech length: 307 words
Speech time: 131 seconds

Argument: Council of Europe has developed robust human rights standards for member states
Explanation: Octavian Sofransky highlights that the Council of Europe has established comprehensive human rights standards that apply to its member states. These standards are designed to protect human rights, democracy, and the rule of law in the digital environment.
Evidence: The adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law in May, which was opened for signature in September.
Major discussion point: Human Rights Standards Online
Agreed with: Menno Ettema, on the point that human rights apply equally online and offline

Naomi Trewinnard

Speech speed: 160 words per minute
Speech length: 1710 words
Speech time: 637 seconds

Argument: Lanzarote Convention sets standards to protect children from sexual exploitation online
Explanation: Naomi Trewinnard explains that the Lanzarote Convention establishes standards for protecting children from sexual exploitation and abuse, including in online contexts. The convention requires states to implement measures for prevention, protection, and prosecution of offenders.
Evidence: The convention criminalizes various forms of online sexual abuse, including grooming, and emphasizes the importance of international cooperation in addressing these issues.
Major discussion point: Human Rights Standards Online

Argument: AI is being used to facilitate sexual abuse of children online
Explanation: Naomi Trewinnard points out that artificial intelligence is being utilized to enable and exacerbate the sexual abuse of children in online environments. This includes the use of AI to generate abusive content and facilitate grooming.
Evidence: Examples include the use of generative AI to create images of sexual abuse of children and large language models being used to facilitate grooming of children online.
Major discussion point: Artificial Intelligence and Human Rights

Argument: Cooperation with private sector needed to obtain electronic evidence
Explanation: Naomi Trewinnard highlights the necessity of collaboration between governments and private sector companies to access and preserve electronic evidence. This cooperation is crucial for effectively investigating and prosecuting online crimes, particularly those involving child exploitation.
Evidence: Reference to the Cybercrime Convention and its Second Additional Protocol as tools for obtaining evidence across jurisdictions.
Major discussion point: Collaboration to Protect Rights Online
Agreed with: Menno Ettema and Ivana Bartoletti, on the need for collaboration to protect rights online

Clare McGlynn

Speech speed: 132 words per minute
Speech length: 795 words
Speech time: 359 seconds

Argument: Istanbul Convention addresses digital dimension of violence against women
Explanation: Clare McGlynn discusses how the Istanbul Convention has been interpreted to address the digital aspects of violence against women and girls. The convention recognizes that online and technology-facilitated violence are forms of gender-based violence that require specific attention and action.
Evidence: The adoption of a general recommendation on the digital dimension of violence against women and girls by GREVIO in 2021.
Major discussion point: Human Rights Standards Online

Argument: AI creates new forms of algorithmic discrimination not covered by existing laws
Explanation: Clare McGlynn points out that AI systems can create new forms of discrimination that may not be covered by traditional anti-discrimination laws. This algorithmic discrimination may affect groups that are not typically protected by existing legislation.
Major discussion point: Artificial Intelligence and Human Rights
Differed with: Ivana Bartoletti, on the effectiveness of existing laws in addressing AI-related discrimination

Argument: Greater political prioritization and action from tech platforms needed
Explanation: Clare McGlynn calls for increased political focus and concrete actions from major technology platforms to address online harms. She emphasizes the need to move beyond rhetoric to implement effective measures for protecting rights online.
Major discussion point: Collaboration to Protect Rights Online

Audience

Speech speed

63 words per minute

Speech length

308 words

Speech time

289 seconds

Some human rights are more difficult to apply online

Explanation

The audience response indicates a perception that certain human rights may be more challenging to implement or enforce in online contexts. This suggests that the digital environment presents unique challenges for human rights protection.

Major Discussion Point

Human Rights Standards Online

Ivana Bartoletti

Speech speed

139 words per minute

Speech length

2037 words

Speech time

876 seconds

AI can perpetuate and amplify existing stereotypes and biases

Explanation

Ivana Bartoletti explains that AI systems can reinforce and magnify existing societal biases and stereotypes. This occurs because AI models are trained on data that reflects historical inequalities and discriminatory patterns.

Evidence

Examples of bias in facial recognition systems and banking services that disproportionately affect women and minorities.

Major Discussion Point

Artificial Intelligence and Human Rights

Differed with

Clare McGlynn

Differed on

Effectiveness of existing laws in addressing AI-related discrimination

AI can be leveraged to address inequalities if there is political will

Explanation

Ivana Bartoletti argues that AI technologies can be used positively to identify and address societal inequalities. However, this requires intentional effort and political commitment to harness AI for social good.

Evidence

Suggestions include using big data analytics to identify patterns of discrimination and leveraging AI to set higher standards for diversity in businesses.

Major Discussion Point

Artificial Intelligence and Human Rights

Regulatory sandboxes allow government, companies and civil society to work together

Explanation

Ivana Bartoletti proposes the use of regulatory sandboxes as a collaborative approach to addressing challenges in AI governance. These sandboxes provide a safe environment for experimentation and dialogue between different stakeholders.

Evidence

Mention of different types of sandboxes (regulatory, technical) as spaces for collaboration.

Major Discussion Point

Collaboration to Protect Rights Online

Agreed with

Menno Ettema

Naomi Trewinnard

Agreed on

Need for collaboration to protect rights online

Agreements

Agreement Points

Human rights apply equally online and offline

Menno Ettema

Octavian Sofransky

Human rights apply equally online and offline

Council of Europe has developed robust human rights standards for member states

Both speakers emphasize that human rights standards should be applied consistently in both digital and physical spaces, with the Council of Europe playing a key role in developing these standards.

Need for collaboration to protect rights online

Menno Ettema

Naomi Trewinnard

Ivana Bartoletti

Multi-stakeholder collaboration and dialogue is key

Cooperation with private sector needed to obtain electronic evidence

Regulatory sandboxes allow government, companies and civil society to work together

These speakers agree on the importance of collaboration between various stakeholders, including governments, private sector, and civil society, to effectively address online human rights issues and challenges in AI governance.

Similar Viewpoints

Both speakers highlight the importance of specific conventions addressing digital dimensions of violence and exploitation, particularly for vulnerable groups like children and women.

Naomi Trewinnard

Clare McGlynn

Lanzarote Convention sets standards to protect children from sexual exploitation online

Istanbul Convention addresses digital dimension of violence against women

Both speakers point out that AI systems can reinforce and create new forms of discrimination, potentially affecting groups not typically protected by existing legislation.

Ivana Bartoletti

Clare McGlynn

AI can perpetuate and amplify existing stereotypes and biases

AI creates new forms of algorithmic discrimination not covered by existing laws

Unexpected Consensus

Positive potential of AI in addressing inequalities

Ivana Bartoletti

AI can be leveraged to address inequalities if there is political will

Despite the discussion largely focusing on the risks and challenges of AI, Ivana Bartoletti unexpectedly highlights the potential for AI to be used positively in addressing societal inequalities, given the right political commitment.

Overall Assessment

Summary

The main areas of agreement include the application of human rights standards online, the need for multi-stakeholder collaboration, the importance of specific conventions addressing digital violence, and the recognition of AI’s potential risks and opportunities.

Consensus level

There is a high level of consensus among the speakers on the importance of protecting human rights online and the need for collaboration. This consensus implies a strong foundation for developing and implementing effective strategies to address online human rights issues and AI governance challenges. However, there are nuanced differences in approaches and emphasis, particularly regarding the potential of AI to address inequalities.

Differences

Different Viewpoints

Effectiveness of existing laws in addressing AI-related discrimination

Clare McGlynn

Ivana Bartoletti

AI creates new forms of algorithmic discrimination not covered by existing laws

AI can perpetuate and amplify existing stereotypes and biases

While both speakers acknowledge AI’s potential for discrimination, Clare McGlynn emphasizes the inadequacy of existing laws to address new forms of algorithmic discrimination, whereas Ivana Bartoletti focuses on how AI amplifies existing biases without explicitly stating that current laws are insufficient.

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of existing legal frameworks in addressing AI-related discrimination and the specific approaches to leveraging political will and tech platform action.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but have slightly different emphases or approaches. This suggests a general consensus on the importance of addressing human rights in the digital space and the challenges posed by AI, but with some variation in proposed solutions or areas of focus. These minor differences do not significantly impede the overall discussion on enhancing online safety and human rights standards.

Partial Agreements

Both speakers agree on the need for political action to address AI-related challenges, but Ivana Bartoletti emphasizes leveraging AI positively to address inequalities, while Clare McGlynn focuses on demanding action from tech platforms to reduce online harms.

Ivana Bartoletti

Clare McGlynn

AI can be leveraged to address inequalities if there is political will

Greater political prioritization and action from tech platforms needed

Takeaways

Key Takeaways

Human rights apply equally online and offline, but some are more difficult to enforce in the digital space

Existing human rights conventions like Lanzarote and Istanbul need to be adapted for the online context

AI poses both risks (amplifying biases, facilitating abuse) and opportunities (addressing inequalities) for human rights online

Multi-stakeholder collaboration between governments, tech companies, and civil society is crucial for protecting rights online

There is a need to move from rhetoric to concrete action in enforcing human rights standards online

Resolutions and Action Items

States should criminalize all forms of sexual exploitation and abuse facilitated by emerging technologies

Governments and companies should create regulatory sandboxes to experiment with AI governance

Tech platforms need to take more proactive measures to reduce online harms

Stakeholders should engage in dialogue and consultations to develop better online protection standards

Unresolved Issues

How to effectively balance innovation with human rights protection in AI development

How to address new forms of algorithmic discrimination not covered by existing laws

How to ensure transparency and auditability of AI systems used by private companies

How to protect human rights defenders from misuse of defamation laws to silence them online

Suggested Compromises

Using AI and big data analytics to identify patterns of discrimination while ensuring privacy protections

Developing narrowly-defined hate speech laws instead of broad defamation laws to protect freedom of expression

Balancing content moderation to protect vulnerable groups while preserving free speech online

Thought Provoking Comments

The UN and the Council of Europe have clearly stated that human rights apply equally online as they do offline. But how can well-established human rights standards be understood for the online space and in new digital technologies?

speaker

Menno Ettema

reason

This framed the key question for the entire discussion, setting up an exploration of how existing human rights frameworks can be applied to rapidly evolving digital spaces.

impact

It set the agenda for the session and prompted speakers to address specific ways human rights standards are being adapted for online contexts.

The committee really recognises and emphasises the importance of international cooperation, including through international bodies and international meetings such as this one.

speaker

Naomi Trewinnard

reason

This highlighted the critical need for global collaboration in addressing online safety and rights issues that transcend national borders.

impact

It shifted the conversation to focus on international cooperation and multi-stakeholder approaches throughout the rest of the discussion.

If we’re ever going to prevent and reduce violence against women and girls, including online and technology-facilitated violence against women and girls, we need to change attitudes across all of society, including amongst men and boys.

speaker

Clare McGlynn

reason

This comment emphasized the societal and cultural dimensions of online violence, moving beyond just technical or legal solutions.

impact

It broadened the scope of the discussion to include education and awareness-raising as key strategies alongside legal and technological approaches.

AI does threaten human rights, especially for the most vulnerable in our society. And it does for a variety of reasons. It does because it perpetuates and can amplify the existing stereotypes that we’ve got in society.

speaker

Ivana Bartoletti

reason

This introduced a critical perspective on AI, highlighting its potential to exacerbate existing inequalities and human rights issues.

impact

It sparked a more nuanced discussion about both the risks and potential benefits of AI in relation to human rights and online safety.

We can leverage AI and algorithmic decision-making for the good if we have the political and social will to do so. Because if we leave it to the data alone, it’s not going to happen because data is simply representative of the world.

speaker

Ivana Bartoletti

reason

This comment provided a balanced view on AI, acknowledging its potential for positive impact while emphasizing the need for intentional human guidance.

impact

It led to a discussion of specific ways AI could be leveraged to promote equality and human rights, shifting the tone from purely cautionary to also considering opportunities.

Overall Assessment

These key comments shaped the discussion by framing it within the context of applying existing human rights frameworks to digital spaces, emphasizing the need for international cooperation, highlighting societal dimensions beyond technical solutions, critically examining the impact of AI on human rights, and exploring the potential for AI to be leveraged positively with proper guidance. The discussion evolved from a general overview of online human rights issues to a nuanced exploration of specific challenges and opportunities, particularly in relation to AI and international collaboration.

Follow-up Questions

How can we protect human rights defenders online from being charged under defamation laws?

speaker

Jaica Charles

explanation

This is important because defamation laws are being misused to silence human rights defenders, particularly in the African context.

How can existing non-discrimination laws be adapted to address algorithmic discrimination that may not align with traditional protected grounds?

speaker

Ivana Bartoletti

explanation

This is crucial as AI systems can create new forms of discrimination that current laws may not adequately cover.

How can we leverage AI and big data to understand and address root causes of inequalities?

speaker

Ivana Bartoletti

explanation

This represents an opportunity to use AI for positive social impact and to combat discrimination.

How can governments, civil society, and tech companies more effectively collaborate to ensure online platforms are protecting and upholding rights?

speaker

Mia McAllister

explanation

Effective collaboration between these stakeholders is crucial for addressing online safety and human rights issues.

What are some key interventions to improve online safety for women and girls in West Africa, particularly in relation to the Istanbul Convention?

speaker

Peter King Quay

explanation

This highlights the need for region-specific strategies to implement global human rights standards in the digital space.

How can AI literacy be improved through education to help people critically engage with these technologies?

speaker

Ivana Bartoletti

explanation

Developing ‘distrust by design’ and critical thinking skills is important for navigating the challenges posed by AI technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.