WS #2 Bridging Gaps: AI & Ethics in Combating NCII Abuse
Session at a Glance
Summary
This panel discussion focused on the ethical use of AI in combating non-consensual intimate image (NCII) abuse. Experts from various organizations, including Meta, Digital Rights Foundation, and SWGFL, explored the potential benefits and risks of using AI in this context.
The panelists emphasized the importance of putting victims and survivors at the center of any technological solutions. They discussed the need for AI systems to be adapted to different cultural and legal contexts, as current models are often trained on Western data. The experts highlighted the potential of AI in detecting and preventing NCII abuse, but also stressed the importance of maintaining human oversight and easy reporting mechanisms for users.
Privacy concerns were a significant topic, with panelists noting the sensitive nature of the data involved and the need for transparency in how AI systems handle this information. The discussion touched on the challenges of balancing the use of AI for protection with respecting user autonomy and privacy.
The panel addressed the evolving nature of online harms, including the rise of deepfakes and synthetic content. They noted that while the images may be fake, the harm to victims is real and can have severe psychological impacts.
Accountability was another key theme, with panelists discussing the need for better collaboration between platforms, law enforcement, and NGOs to hold perpetrators accountable. The experts called for more research, investment in NGOs working in this space, and the development of ethical frameworks and governance structures for AI use in combating NCII abuse.
The discussion concluded with a call for a global effort to develop AI solutions focused on safeguarding users and creating robust guardrails to protect against misuse. The panelists emphasized the need for ongoing dialogue and collaboration among various stakeholders to address this complex issue effectively.
Key points
Major discussion points:
– The ethical challenges and potential benefits of using AI to combat non-consensual intimate image (NCII) abuse
– The importance of putting victims/survivors at the center when developing AI tools and policies
– The need for more transparency from tech companies on how they are using AI to address NCII
– The evolving nature of NCII abuse, including the rise of AI-generated deepfakes
– The gaps in legal frameworks and accountability measures for perpetrators of NCII
The overall purpose of the discussion was to explore how AI technology could be responsibly developed and deployed to help combat NCII abuse, while considering the ethical implications and potential risks.
The tone of the discussion was thoughtful and nuanced throughout. Panelists acknowledged both the potential benefits of AI in addressing NCII as well as the ethical concerns and need for caution. There was a sense of urgency about the issue, but also recognition of the complexity involved in developing effective solutions. The tone became slightly more urgent towards the end when discussing the need for better legal frameworks and accountability measures.
Speakers
– David Wright: CEO of the UK charity SWGfL and Director of the UK Safer Internet Centre
– Nighat Dad: Founder of the Digital Rights Foundation, member of the Meta Oversight Board, member of the UN Secretary-General’s High-Level Advisory Body on AI
– Karuna Nain: Online safety expert, former director of Global Safety at Facebook/Meta
– Sophie Mortimer: Manager of the Revenge Porn Helpline and Report Harmful Content Service at SWGFL
– Boris Radanovic: Head of Engagements and Partnerships at SWGFL
Additional speakers:
– Deepali Liberhan: Global Director of Safety Policy at Meta
– Niels Van Pamel: Policy Advisor, Child Focus Belgium
– Adnan A. Qadir: Senior Legal and Advocacy Advisor, SEED Foundation
Full session report
The panel discussion on the ethical use of artificial intelligence (AI) in combating non-consensual intimate image (NCII) abuse brought together experts from Meta, Digital Rights Foundation, SWGFL, and the Revenge Porn Helpline. The conversation explored the potential benefits and risks of using AI in this context, highlighting the complex challenges faced by stakeholders in addressing this sensitive issue.
Ethical Considerations and AI Implementation
A central theme of the discussion was the need to prioritize victims and survivors when developing technological solutions. Sophie Mortimer of the Revenge Porn Helpline emphasized that victim privacy and consent must be at the forefront when using AI tools. The panel debated the terminology of “victims” versus “survivors,” acknowledging the importance of empowering language while recognizing the ongoing nature of the harm.
Karuna Nain and Deepali Liberhan outlined Meta’s approach to AI and safety, highlighting the potential of AI in detecting and preventing NCII abuse. They noted that AI can help with the scale and speed of content moderation, but stressed that human oversight remains essential. Boris Radanovic from SWGFL used an analogy comparing the development of AI to the Wright brothers’ plane, emphasizing the need for continuous improvement and refinement.
Nighat Dad, founder of the Digital Rights Foundation, raised a crucial point about the cultural nuances that AI systems need to account for. She noted that current AI models are often trained on Western data and contexts, potentially limiting their effectiveness in other parts of the world. This observation highlighted the need for more diverse and culturally sensitive AI development to ensure global applicability.
Evolving Nature of Online Harms and Victim Support
The panel addressed the rapidly changing landscape of online harms, including the rise of deepfakes and synthetic content. Nighat Dad pointed out that while the images may be fake, the harm to victims is real and can have severe psychological impacts. The discussion also revealed changing demographics of NCII victims, with an increasing number of cases targeting men and boys.
The crucial role of helplines in providing support and resources was highlighted, with Sophie Mortimer noting that case volumes for helplines are rising exponentially. David Wright mentioned on-device hashing tools like StopNCII.org as a means of empowering victims. Karuna Nain and Sophie Mortimer provided more details about StopNCII.org, explaining how it allows users to create digital fingerprints of their intimate images without uploading the actual content, helping to prevent their distribution on participating platforms.
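For readers unfamiliar with the mechanics, a minimal sketch of the “hash on device, share only the hash” pattern described above is shown below. It is an illustration only, not the StopNCII.org implementation: the real service is understood to rely on perceptual hashing so that visually similar copies of an image still match, whereas this sketch uses a plain cryptographic hash for brevity, and the submission endpoint and field names are hypothetical placeholders.

```python
# Illustrative sketch of on-device hashing (NOT the StopNCII.org implementation).
# Assumptions: a hypothetical submission endpoint and JSON field names; a simple
# SHA-256 digest stands in for the perceptual hash a real service would use.
import hashlib
import json
from urllib import request

HYPOTHETICAL_ENDPOINT = "https://example.org/api/v1/hashes"  # placeholder, not a real API


def hash_image_locally(path: str) -> str:
    """Read the image on the user's own device and return a hex digest.

    The image bytes never leave the device; only this fingerprint is shared.
    """
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def submit_hash(image_hash: str, endpoint: str = HYPOTHETICAL_ENDPOINT) -> int:
    """POST only the hash to a matching service and return the HTTP status code."""
    payload = json.dumps({"hash": image_hash, "media_type": "image"}).encode("utf-8")
    req = request.Request(endpoint, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    digest = hash_image_locally("my_private_photo.jpg")
    print("Submitted hash, status:", submit_hash(digest))
```

The design point the panellists emphasised is that the intimate image itself never leaves the victim’s device; participating platforms receive only an irreversible fingerprint against which future uploads can be checked.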
Legal Challenges and Platform Accountability
The discussion revealed significant gaps in current legal frameworks for addressing NCII abuse. Panelists highlighted the challenges of prosecuting NCII cases and called for better collaboration between platforms and law enforcement agencies. They emphasized the need for more research, investment in NGOs working in this space, and the development of ethical frameworks and governance structures for AI use in combating NCII abuse.
Karuna Nain called for greater transparency from tech companies about how they are using AI to combat NCII. This sentiment was shared by other panelists, who emphasized the need for platforms to improve their reporting mechanisms and cooperation with law enforcement agencies.
Gender Dynamics and Cultural Considerations
Nighat Dad discussed the gender dynamics of NCII, highlighting how societal norms and cultural contexts can exacerbate the impact on victims, particularly women and girls in conservative societies. The panel acknowledged the need for AI systems and support services to be adaptable to different cultural contexts and sensitive to the unique challenges faced by victims from diverse backgrounds.
Conclusion and Future Directions
The discussion concluded with a call for a global effort to develop AI solutions focused on safeguarding users and creating robust guardrails to protect against misuse. Key takeaways included:
1. The potential of AI to help combat NCII when implemented ethically with human oversight
2. The importance of prioritizing victim privacy, consent, and empowerment
3. The need for improved transparency from platforms and better collaboration with law enforcement
4. The crucial role of helplines and victim support services
5. The importance of adapting AI systems and support services to diverse cultural contexts
6. The need for continued research and investment in NGOs working on NCII issues
As the conversation progressed, it became clear that addressing NCII abuse requires a multifaceted approach involving technology, policy, and support services. The panelists’ insights underscored the complexity of the challenge and the need for continued research, adaptation, and collaboration to develop effective strategies in this rapidly evolving digital landscape.
Session Transcript
David Wright: …to this particular workshop that we’re having, looking at, or entitled, Bridging the Gaps, AI and Ethics in Combating NCII Abuse. And NCII abuse is around non-consensual intimate image abuse, which is a subject that we’re going to be exploring over the course of this panel. I’m David Wright, I am CEO of a UK charity, SWGFL, and a director of the UK Safer Internet Centre. We will explore, and a couple of my colleagues who are here will explain, some more aspects of this, some of the things that we do, particularly the Revenge Porn Helpline, and also StopNCII.org. And so we will clearly cover some of those gaps. I’m joined, in terms of this panel conversation, by a number of very esteemed guests and panellists. And I’m going to introduce those to you, just to start with. And so, we’ve got a series of questions that we’ll be asking. And so, each of the panellists, and I’ll just, first of all, introduce Nighat Dad, in the middle. So, Nighat is the founder of the Digital Rights Foundation, and also a member of the Meta Oversight Board, as well as part of the UN Secretary-General’s High-Level Advisory Body on AI. If I next turn to Karuna, who’s joining us online. So, Karuna is an online safety expert, with two decades of experience in the intersection of online safety, policy, government affairs, and communications. She consults with tech companies and non-profits on their strategy, policies, and technology to make the internet safer. Karuna previously served as a Director of Global Safety at Facebook/Meta, where she spent nearly a decade working on issues of child online safety and well-being, women’s safety, and suicide prevention. At Meta, she partnered with SWGFL to launch StopNCII.org to help victims of non-consensual intimate image abuse. Prior to Facebook, Karuna worked at the U.S. Embassy in India, Ernst & Young, India’s first 24×7 news channel, New Delhi Television, and a German broadcaster. Karuna is a graduate of St. Stephen’s College, University of Delhi, and has completed her post-graduate studies at Albert Ludwigs University. So, welcome to Karuna. Also joined by Deepali, sitting next to me. So, Deepali Liberhan is Global Director of Safety Policy at Meta, and has been with Meta for over a decade. She leads a team of regional safety policy experts and works on policies, tools, partnerships, and regulation across core safety issues. Also joined by one of my colleagues, Sophie Mortimer, online from the UK, where it’s rather early. Thank you, Sophie. Sophie is Manager of the Revenge Porn Helpline and also the Report Harmful Content Service at SWGFL. She coordinates a team of practitioners to support adults in the UK who have been affected by the sharing of intimate images without consent and other forms of online abuse and harms. As part of the StopNCII.org team, she works with NGOs around the world to support their understanding of StopNCII and the help it can give victims and survivors in their communities. The NGO network shares learning and best practice to ensure that StopNCII evolves as a proactive tool that works for everyone, wherever they are. And finally, if I turn to my far right, is my colleague, Boris. Boris Radanovic is an expert in the field of online safety and currently serves as the Head of Engagements and Partnerships at SWGFL, the UK-based charity, which we’ve already talked about.
He works with the UK Safer Internet Centre, which is part of the European Insafe network, in educating and raising awareness about online safety for children, parents, teachers and other stakeholders across the world. Boris has worked extensively with various European countries, including Croatia, where he worked at the Safer Internet Centre there, and has been involved in numerous missions to countries like Belarus, Serbia, Montenegro and North Macedonia, presenting online safety strategies to government officials and NGOs. His focus is on protecting children from online threats, such as cyberbullying, child sexual exploitation and scams, as well as empowering professionals through workshops and keynote speeches. One of his key contributions includes leading online safety education efforts, where he emphasises the evolving risks in the digital world, such as grooming and intimate image abuse. His involvement with initiatives like StopNCII.org reflects his commitment to helping prevent non-consensual sharing of intimate images. Introductions complete. So what we’ve got: I’m just going to invite all the panellists to give us a couple of minutes’ introduction, and then we’ve got a series of structured questions that we will open to each of the panellists, and then to everyone in the room here, and also to those of you online as well. So we will be having a really in-depth conversation about this and, based on the introductions, you can now understand that it is a very esteemed panel on this particular subject. So if I can just, Nighat, if I can throw it over to you, just a two-minute introduction into this. Thank you.
Nighat Dad: Can you hear me? Okay. Yeah, no, thank you so much, David, for organising this panel. It’s a pity that we are doing this on the last day. It should have been on the first day, because many of us have been working on the issue of non-consensual intimate imagery and videos for the last several years, and not only working on the issue and addressing it, but also looking into solutions. Of course, there is the Revenge Porn Helpline in the UK, and at the Digital Rights Foundation in Pakistan we also started a helpline, the Cyber Harassment Helpline, and we collaborate together on this as well. In 2016, we started this, and the main idea was basically to address online harms that young women and girls face in a country like Pakistan. And there are so many cultural, contextual nuances that many times platforms are unable to capture, and that was the main reason why we started the helpline: not only to address these complaints by young women and girls in the country, but also to give a clear picture to the platforms of how they can actually look into their products or mechanisms, reporting mechanisms, or remedies that they are providing to different users around the world. I think I’ll just say one thing and stop there: over the years we have seen that online harms, or violence against women, or tech-facilitated gender-based violence, now we have so many names for this, but non-consensual intimate imagery around the world has very different consequences in different jurisdictions. In many parts of the world, it kind of limits itself to the online spaces, but in some jurisdictions it turns into offline harm against especially marginalized groups like young women and girls. And in the last couple of years, I think the very concerning thing is how AI tools are easily accessible to bad actors, where they are making deepfakes and synthetic imagery of women, not only ordinary users, but also women in public spaces, and verifying those deepfakes I think is a challenge, not only for people who have been working on this issue, but for law enforcement, and then you just look at the larger public, who absolutely have no idea how to verify this, and they just believe what they see online. And I think this is the challenge that we all are facing at the moment. I’ll stop here.
David Wright: Nighat, thank you very much. Yes, a subject we will get into without any doubt. I’m next going to just throw it to Karuna, who’s joining us online. Karuna, just a couple of minutes of introduction. Thank you.
Karuna Nain: Thank you so much, David. I do want to give a shout out to you for organizing the discussion on this topic, because I don’t think we’ve done enough work or had enough dialogue as to how the power of artificial intelligence can be used to actually prevent some of this distribution of intimate imagery, or to deter perpetrators online, and lastly, also to support victims. You know, we’ve heard time and time again how absolutely debilitating it can be to be in that moment where you are worried that your intimate images are going to be shared online, or they have actually been shared online and you’ve just come to know, and there’s so much that we can do with artificial intelligence to support people in that moment, to give them the opportunity to actually protect themselves online. So I just want to give a shout out to you for organizing this very, very important discussion, and I’m looking forward to hearing what comes out of this workshop and the kind of ideas that are generated as to how not just tech platforms but nonprofits, such as South West Grid for Learning and, you know, Nighat’s Digital Rights Foundation, can actually leverage this to be able to support people online.
David Wright: Thank you very much. Very kind. But yeah, as you say, let’s try to harness some of the power of this rather than necessarily some of the challenges that we always see as well. So thank you very much. Next we’ll turn to Deepali.
Deepali Liberhan: Thanks, David, and thank you, Karuna, I think that was really very informative. And I think it was very clear that we have to be very, very careful when we think about safety, and multi-pronged. So we think about a couple of things when we’re thinking about safety: we think about whether we have the right policies in place on what is okay and not okay to share on the platform; we have our tools and features to give users choice and control over what they’re seeing; and ways for users to be able to address some of the content that they’ve been seeing. I just want to step back a little bit and talk about how StopNCII.org came into being, when Meta heard loud and clear from a lot of our experts, a lot of our users, that NCII is a huge issue. And Karuna was actually one of the people who was working on this. And we were able to actually move beyond just being able to address this issue at a company level on our platforms and address it at a cross-industry level. So I think there is really a genuine place for industry and civil society to come together to address some of these harms in a very scalable way, something as important as non-consensual intimate imagery. And we also have to come together and try and understand, as Karuna put it, what are the ways that we can use this technology to actually help victims or provide education or provide resources. So we do that currently on our platforms. So for example, if you look at something like NCII, or let me give you an example of suicide and self-injury, we’re able to use proactive technology to identify people who have posted content which can contain suicidal content or content referring to eating disorders. And we’re able to catch that content and able to send them resources, as well as connect them with local helplines. That is such an important way that we can use technology to make sure that people who need the help are able to get it. And sometimes there are no quick solutions; it takes time to have discussions and work together. And it’s a combination of technology and the advice of experts who are actually working on this issue to come up with solutions both to prevent that harm, to address that harm, and to provide resources and support to victims. Sorry, David, I know I took the long way to this, but I just wanted to provide some context.
David Wright: Deepali, thank you very much. Next, I’m going to throw it to you, Sophie, in terms of a two-minute introduction. Thank you.
Sophie Mortimer: Thank you, David, and good morning, everyone. Having worked supporting survivors of intimate image abuse for over eight years, I do think that we need to approach the use of AI in providing support with caution. We know that there are advantages to be gained by the use of AI technologies in reporting harmful content at scale and with speed. However, it’s also important to remember that victims and survivors can be abused with these tools and may not want to engage with them while seeking support, because trust is understandably degraded. In fact, we have previously worked at South West Grid for Learning on developing an AI support tool, and ultimately we decided that the risks were not outweighed by the benefits, certainly not at this time. We simply couldn’t be sure that the technology could safeguard people in their time of need adequately enough. I really hope this will change, because I think there is huge potential here and that we can revisit these concepts, but it’s just really imperative that we have trust in the security of such a tool and that it prioritises the safety and wellbeing of users.
David Wright: Thank you, Sophie. And finally, Boris.
Boris Radanovic: Thank you, David. And thank you very much for organising this, and good morning, everybody. I think, at least on a good personal note, it is morning. And if I’m going to call for anything in my introduction, it is that we all, especially the policy and the governance sector, need to wake up to the benefits and potential threats of AI. If we know anything from the last couple of decades of online safety and protection of children and adults, it is that modalities of harm are changing rather rapidly. And speaking about the application of AI or the benefits of AI, we are missing something. And I’m really, really glad that this is on the last day of IGF. So I hope this conversation will continue. But we are missing governance and structure and frameworks coming from, and being supported by, yes, the industry, yes, the NGOs, yes, the researchers, but as well, nation states across the world. And if I can jump off a point from Karuna, absolutely, we need a broader conversation on this, of understanding, yes, the potential threats of it, but as well, emphasizing the benefits and how it can be utilized to better protect and better align with some of our policies. And I would agree with my dear colleague, Sophie, from the Revenge Porn Helpline, that currently the threats do outweigh the benefits, and we need to make sure that advocating for the proper use of tools such as StopNCII.org and other inventive ways of solving already known problems by AI, with AI, or at least with the support of AI, is going to be imperative going forward. And the only thing that I can say is that the possible support coming out of the technology capabilities of AI is tremendous, and we need to rein that in and understand it much, much better than we do now.
David Wright: Okay, Boris and everyone, thank you ever so much. And we’re also joined, from a moderation perspective, by our colleague Niels, who is managing the online aspects of this. So those of you joining us online, if you’re asking any questions… Excellent, we’ve got one. Okay, so by way of diving into this particular issue, of which you’ve heard some brief introduction, in terms of specific questions, as we get down into the aspects of AI, particularly in the context of non-consensual intimate image abuse, I’m first going to turn to Nighat. So Nighat, the question to pose to you: your advocacy for digital rights, particularly in regions with differing privacy laws, places you at the forefront of this debate. How should AI systems for NCII detection be adapted ethically to fit varying cultural and legal contexts?
Nighat Dad: Yeah, I think Sophie and Boris touched a little bit on that. Yes, we can use AI systems to our benefit as well and harness them in terms of giving speedy remedies to the victims and survivors of tech-facilitated gender-based violence. But at the same time, I think in our context, we have to be extra careful and cautious. AI systems need to solve for cultural nuance, and we know that current models are trained on English and other Western contexts and languages. But I’m also hopeful and optimistic that while we are having these conversations, these conversations will lead to a new generation of AI that will better understand cultural and linguistic nuance. And I understand, sitting on the UN Secretary-General’s High-Level Advisory Body on AI, we have had those conversations in the last year, where we brought global majority perspectives from different angles, so that the conversations around AI are not only happening in the Global North by some Global North countries while global majority countries are not really part of those conversations. And unless you are part of the conversation, you actually don’t know how to address different issues while using AI technologies, or how to be aware of the threats and risks of these technologies. So I think these conversations are happening in different spaces. I’m glad that we are also talking about this as different helplines and those who are addressing NCII. But I think it’s also important that we understand that we can’t solely rely on AI to combat NCII. Platforms, social media platforms, still need to commit to human moderators and human review, and they need to create easy pathways for users to escalate this content when automation misses it. So those are three things that come to my mind: broader training for AI, continued human oversight, and user-friendly reporting mechanisms. I’d also like to see transparency and constant auditing of AI so we can see how well these automated content moderation systems are performing, and transparency should be granted to civil society so that there are opportunities for third-party reviews of how these models perform. And I’d just like to plug the white paper that we released from the Oversight Board, which is around content moderation in the era of AI. It sort of draws on our own experiences over the last four years of delving into cases that we have decided, and looking into so many cases related to gender and tech-facilitated gender-based violence that users have faced on Meta platforms. And we looked into the tools, we looked into the community guidelines and policies of Meta, and gave them really good recommendations. But this white paper is not just for Meta platforms; it is for all platforms who are actually using AI to combat harassment on their platforms. And there are so many recommendations that we have given, and one of them is basically constant auditing of the AI tools that they are using on their platforms, but also giving access to third parties like researchers in terms of what kind of feedback they can give to Meta. And I think Meta has leverage because they have a very good initiative, the Trusted Partners initiative, and they can leverage that sort of ecosystem in terms of getting feedback and also providing support to those who are already addressing tech-facilitated gender-based violence.
David Wright: Some great, really great points there, and I’m really struck, too, by, you know, the point about the Westernized data and extensive training models, which is a really good point, and I also want to recognize the global leadership that you provide in this space and have done for so many years. So it’s great to have you here, and what an opportunity for everybody to ask questions too. So Nighat, thank you very much. Okay, just as I sort myself out, next I’m going to turn to you, Karuna, in terms of a question. And so, as both a trustee of ours, a very important trustee of ours, thank you very much, and obviously a key advocate as well for StopNCII, having been the one with the original idea, and certainly we feel a heavy responsibility for StopNCII, it having largely been your creation. So your question, you know: as a driving force behind StopNCII.org, what role do you see AI playing in scaling global NCII protection efforts? What ethical principles are essential to ensuring AI tools support victims without compromising user autonomy? Karuna?
Karuna Nain: Thank you, David. And, you know, yours is a two-part question, and both are really, really important questions. The one thing that, you know, just following up from what Nighat was saying: I think there’s not been enough transparency from the tech industry, unfortunately, as to how they’re currently leveraging the power of AI in this space. We’ve heard a lot of how they’re using AI to get ahead of, for example, child sexual abuse materials or anything related to child abuse on their platforms. But they’re not sharing enough of how they are using AI to get ahead of some of the harmful non-consensual sharing of intimate images on their platform. And credit to Meta, Deepali. Meta has been one of the few companies that’s really talked about how they were able to leverage the power of AI in one way. And I’m not sure, Deepali, if you’re going to touch on this later, and forgive me if I’m stealing your thunder here. But the use of AI, especially in closed secret groups where victims may not be aware that their intimate images are being shared: using AI in those spaces to be able to proactively identify if an image or a video is potentially non-consensually shared, and to bump it up to reviewers for reviewing the content and taking it down if it is NCII. I think that’s a really great example of how this technology can be used to get ahead of the harm. Because many times we’ve heard from victims that the onus and the burden on them for reporting, for trying to check if this content has been shared online, is excruciatingly painful. So, I think I talked about this in my earlier opening statement as well, there are three ways in particular, I think, that companies could be leveraging the power of AI to get ahead of this harm. One is prevention: if there are signals which they have on their platforms, if someone has, for example, updated their relationship status to say that they’ve recently been through a breakup, or expressed any kind of trauma or hurt, which could potentially mean that they have intimate images which they might want to send through StopNCII.org, for example, to nip the harm in the bud. Two is deterrence: if someone is trying to upload NCII, if the signals are all there, could the platform then bump up an education card to tell them that this is actually harmful, it’s illegal in many countries, or again, you know, to really stop that abuse in its tracks and not allow that content to be shared in the first place. And third is, of course, you know, supporting victims. Again, things like if someone is searching for NCII-related resources on a search engine or on a platform, then could you bump up something like StopNCII.org to them at that point to tell them these kinds of services or these kinds of support options exist, that helplines exist around the world. Many victims don’t know, and this is the first time that they’re ever hearing of this abuse when they’re experiencing it. But, you know, all three actually, Sophie, Nighat, and Boris, all really raised very important points about thinking through some of the risks and some of the loopholes with deploying AI without being very thoughtful about it. So a few things that I’d love to list down, just, you know, things that we learned when we were building StopNCII.org or working with Sophie and other helplines around the world, on what it is that organizations really need to keep in mind when they’re building out these technologies to support victims.
One, keeping victims at the center of the design: making sure that you’re not speaking on behalf of them, you’re giving them agency, you’re empowering them, but not taking any decisions on behalf of them. Two, no, you know, shaming or victim blaming. They’re under enough pressure, enough stress. This is not their, you know, mistake that, you know, intimate images are being shared. This is on the perpetrator. Trust is not a bad thing, you know, over here. It’s the perpetrator who’s broken the trust, and they need to feel ashamed, not the person who’s in those intimate images. You know, Nighat talked about bias, and just, you know, making sure that any technology that is developed is taking into account other instances where, you know, this content may not be NCII. I’m not sure that AI, you know, is at that stage right now; it needs more training, it needs more support, to be able to make sure that it’s 100% accurately identifying content as NCII, and recognizing those biases is a really important part of it. Also, accountability and transparency. If tech companies are using these technologies, and I’m hoping that they are, or if organizations, nonprofits, are thinking about how they can use AI in this space: being transparent, being accountable, having ways for people to report. Nighat talked about how important reporting still is even in these scenarios; giving people the ability to reach out to the service or the platform is really important. And of course, I will always keep harping on prevention: if there are ways that this technology can be used to prevent the harm in the first place, to deter the harm, I think that a lot more work should be done over there, because once the harm has happened, it’s already quite late. So the more work that can be done in that space would be really great. I’ll stop there, a lot of things that I’ve thrown out.
David Wright: Karuna, thank you very much. Can I also, perhaps we’ve made an assumption and not really introduced StopNCII.org. Karuna, can I ask you to do that? Just to explain briefly to everybody what StopNCII.org is and how it works.
Karuna Nain: Absolutely, and Sophie, please jump in if I’m missing anything, I know it’s your baby and I’m just talking about it. But the whole goal behind StopNCII.org is to support people to really stop the abuse in its tracks. The way StopNCII.org works is that if you have intimate images which you are worried will be used without your consent on any one of the participating platforms, you can use this platform to create hashes, or digital fingerprints, of those photos and videos and share those hashes with the participating platforms, so that if anyone tries to upload that photo or that video on those participating platforms, they can get an early signal that this content may violate their policies. They can send it to their reviewers or use their technology to determine whether this violates their policies or not, and stop that content from being shared on their services. So it’s really, you know, very much a prevention tool. If content is being shared on platforms already, we encourage people to actually report on that platform to get the fastest action. But if you’re worried that it’s going to be shared on any one of the participating platforms, in addition to that, you can use StopNCII.org to stop that abuse in its tracks. Sophie, I don’t know if I missed anything and if you want to add anything onto that.
Sophie Mortimer: Beautifully done. I would just highlight the fact that these digital hashes are created on somebody’s own device. They don’t have to send that image to anyone. And I think that’s enormously empowering and a huge step forward in the use of technology that puts the victims and survivors right at the heart of these evolutions.
Nighat Dad: Absolutely. One more thing, if I can just add, sorry, David, Sophie, is about the privacy-preserving way in which StopNCII.org has been built. In addition to not taking those photos and videos, just taking the hashes from the victims, very minimal data is asked of the victims, because we know that this is such a harrowing experience. We don’t want to stop them from using the service in any way. And I think that’s also very important as we’re talking about ethics around the building of any of this AI technology: making sure whatever data is collected is minimal, is proportionate to what is needed to run these services, and not using the data for anything other than what you’re collecting it for. You’re telling people what you’re collecting the data for, and also not using the data for anything without their consent. I think it’s really, really important. Privacy and data should be at the center of the design of any AI technology that’s built in this space.
David Wright: Thank you both. Yeah, an amazing kind of explanation from the two people leading this. Thank you. Next, we’re going to come to Deepali, who we’ve already heard from. She is Global Director of Safety Policy at Meta. So Deepali, with your expertise on safety, can you talk about how Meta is thinking about responsible development of AI? Can you give some examples of how Meta is thinking about safety and AI and the challenges ahead, clearly in the context of NCII?
Deepali Liberhan: Thanks, David. I think Karuna has done such a good job of it, but I’m going to try and add some additional context. The first thing that I want to say is just to step back and talk a little bit about how Meta is currently using AI. So I’ve been with Meta, like I said, for about a decade. When I joined, and in fact when Karuna joined as well, when we talked about safety and when we talked about our community standards, and community standards essentially are rules in which we say clearly what is okay and not okay to post on our platforms, including NCII, including hate speech, including CSAM, we used to really encourage user reporting, because we didn’t really have proactive technology built out at that time. So we were dependent on the signal that we were getting from users to be able to understand why that content is violating, and rely more on human review and have that content reviewed by human reviewers to be able to take the appropriate action. Over the years, we’ve invested in really developing proactive technology to be able to catch a majority of this content even before it’s reported to us. We know that a lot of people will see content and just not report it, or they may feel like their peers will judge them for reporting it. So proactive technology really helps to identify that content and remove it. And it doesn’t remove the need for human reviewers, but it makes their job easier and we’re able to do this better at scale. So today, for example, as we publish in our community standards reports, we’re able to remove a majority of the content that violates our community standards before it is even reported to us. And we’re also trying to work on understanding how our large language models can essentially help us do this better. And the two ways where we think there’s going to be an impact are, one, speed, and the other, accuracy. Will it be able to help us identify this content even faster? And if we’re able to identify this content faster, what is the accuracy with which we can take action in an automated way, which also lessens the time that human reviewers need to spend on really the important cases versus the cases where there’s clearly very high confidence that it is violating and therefore it can be taken down quickly. So, I mean, to answer the issue in a shorter version, I think that there is a lot more scope for this technology, but there continues to be the importance of a combination of using automated technology as well as human review, so we are taking the right actions and we’re taking the appropriate actions. As we move to responsible AI, David, Meta has an open source approach to our large language models. As you know, we’ve open sourced our large language models, and therefore it’s been really important that before we do this, we have a very thoughtful and responsible way of thinking about how we develop AI within this company. And I’m going to talk about a couple of the pillars that we consider when we’re talking about Gen AI. Actually, I’m not going to go through all the pillars, because that’s going to take a lot more time; I’ll go through a couple of them. The first is obviously robustness and safety. It’s really important that we do two things before we’re releasing our large language models. The first is stress testing the models.
So we have teams internally and externally who stress test those models, and stress testing, or what we call red teaming, is essentially making sure that experts are stress testing the models to find vulnerabilities, and then we are able to identify those vulnerabilities. So to give you an example, we have specialist teams at Meta who are stress testing or red teaming the large language models. We also open it up to the larger public to be able to stress test. So for example, in Las Vegas there is a conference called the DEF CON conference, and our large language models were tested there. Over 2,500 hackers actually stress-tested those models and identified vulnerabilities, which we then used to inform the development of our models. The second thing is fine-tuning the models. Fine-tuning the models, essentially, is fine-tuning them so that they’re able to give more specific responses, and especially so that they are, in some cases, fine-tuned to deliver expert-backed resources. To give an example of this, currently what happens on Facebook and Instagram is if somebody posts content where they are feeling suicidal, or for example there are mental health issues, and either somebody reports it or we’re able to proactively find that content, we are able to send expert-backed resources to the person, which essentially means connecting them to helplines. So if you’re sitting in the UK, you will get connected to a UK helpline. If you’re sitting in India, you’ll get connected to an India helpline. This is because I think fundamentally we believe that we are not the experts on safety in terms of providing this kind of informed support. And to the point that Sophie made, we don’t want our technologies to actually be providing that support. What we want to do is make sure that we are making available the right tools by which young people, or vulnerable people, or targeted people can use the right resources. So coming back to fine-tuning our AI models: the AI models will be fine-tuned through expert resources. So if somebody talks about suicide or self-injury, the response should not be that it’s going to provide you guidance; the response that it will throw up is a list of expert organizations that you can contact in your particular location. And I know I’m repeating myself, but this is a really important way in which we can use these technologies to provide a level of support that we have been able to provide on other platforms like Facebook and Instagram. The third thing that I want to talk about, essentially, when we’re talking about safety and robustness, is that a lot of people don’t really understand AI or AI tools and AI features. So we’re also working with experts generally to try and ensure that people are understanding what Gen AI is. So for example, we work with experts to have resources where parents have tips on how to talk to young people about Gen AI, et cetera. So these are just a couple of things that, at a high-level overview, we think about when we’re thinking about building AI responsibly. I want to quickly cover the other pillars; I won’t talk too much about them. The other pillars that we’re thinking about: it’s about safety and robustness, but it’s also about making sure that there’s privacy, so we have a robust privacy review, and making sure that there is transparency and control. As everybody on the panel said, it’s really important to be transparent about what you’re doing with your Gen AI tools and products.
And we are also working cross-industry to be able to develop standards, to identify provenance, to make sure that users understand whether their content is generated by AI or is not generated by AI. And the other pillar really is good governance, as we talked about, transparency and good governance, as well as, and I don’t know if Nighat mentioned this, but fairness is really important. Fairness to ensure that there is diversity, as well as that it’s inclusive in terms of these technologies, because we all know that access to these technologies is still an issue. So, I mean, that is overall our approach to responsible AI. Let me give you one example. I know I’ve talked about robustness and safety, but in terms of fairness and inclusivity, we actually have a large language model where we are able to translate English into over 200 languages, including some of the lesser-known languages. And I say this because in the trust and safety space, a lot of the material that we develop and a lot of the experts that exist are in the English language. And this is another example, not particular to NCII but overall in the trust and safety space, of how we can actually use a lot of these products and tools that have been developed to further enhance safety and make sure that this messaging is available in the languages that people really understand, and not just English or Western languages. Wikipedia, for example, is using this to translate a lot of their content into these languages. So I think that there are two things. There is a lot more work to be done, but I think that there is great room for collaboration in terms of both how we prevent this, how we address it, but also how we even collaborate better in being able to support some of the people who are dealing with these issues in a better way than we’ve currently been able to do. The last thing that I would say, because I know that sometimes we get asked this a lot: we have community standards which make it very clear what kind of content is not allowed on the platform, irrespective of whether it’s organic content or it’s been developed by Gen AI. If it violates our policies, we will remove that content. And we’ve updated our community standards to make that very clear as well.
David Wright: Deepali, thank you very much. It’s great as well to hear that, also the use of and creation of some of the tools, too. Particularly, I’m interested in the translation to different languages, which we probably all know is a real challenge. And I know from a StopNCII perspective, we do struggle with that, trying to make it, and the support, as accessible as we possibly can. So thank you, Deepali, for that, and also for Meta’s help and Meta’s support with StopNCII, too. Next, Sophie, I’m going to come to you. And so, as we’ve already heard, your work with the Revenge Porn Helpline, I know that particularly well. The question that I want to pose is about what ethical dilemmas you have observed with technology to address NCII abuse, particularly regarding privacy and consent. How do you think AI systems should be designed to respect these sensitive boundaries? Sophie?
Sophie Mortimer: Thank you, David. I think it’s a crucial question, because there’s no doubt that the development of this technology is moving at pace. And I think we could all get quite carried away with what we can achieve with these technologies, but it’s so important that we put the victim and survivor experience at the centre of them. I could probably talk for quite a while on this, but I’ll try and keep it a bit tighter. But I think, crucially, supporting the privacy of victims who are in a moment of absolute crisis is really, really key. So we can use AI tools to help identify and remove non-consensual content, but that requires access to people’s very sensitive images and data. And that can be a huge concern to individuals who might fear the access of technology, because it’s technology that has participated in their abuse. They can fear data breaches or a lack of transparency in how their information is being stored and processed. So there’s a real dilemma there in balancing that need for intervention with the protection of victims, the preservation of their privacy, and stopping future harm. We can use AI technologies to track the use of someone’s images, and this could be enormous. I think Deepali referenced this in terms of the use of technology to handle the scale and the speed at which this content can move across platforms. But that just brings more complexity. So the methods for tracking content can concern victims around surveillance, or there’s a risk of creating systems that monitor individuals more broadly than we intended. How will those images and that data be used in a way that won’t impact on people’s privacy and autonomy? Then the use of people’s data is always very, very concerning. It’s very sensitive personal information used to address this harm. There can be a lack of transparency from many platforms about how these systems are used. How are the models trained? Again, the large language models that are used, and this has been referenced already, I think, by Nighat earlier: we know that they don’t always respond as well, perhaps, to people of different cultural, religious, or ethnic backgrounds. That’s really, really challenging, with the risk of presenting false positives and false negatives. Also, one area that’s often referenced is around synthetic sexual content, which is often referred to as deepfakes. I think there’s a tendency to say, well, we can identify that as fake, so the harm is less. I think the evidence of some victim and survivor voices is that that just isn’t the case. I think just labelling something as fake can undermine the experience of individuals, because there is a real loss of bodily autonomy and self-worth. It can cause really significant emotional distress. If we only focus on the falseness of an image, an AI system might overlook the broader psychological and social impacts on individuals. Certainly, AI can help with evidence collection and privacy. There’s a real role there in terms of watermarking or embedding metadata that helps track the origin, but then there are more ethical questions there around consent, the privacy of people’s data, and whether people understand. I think, again, it’s already been referenced that people’s access to technology, and their understanding of how these technologies work, varies around the world, and we can’t assume consent. It’s just really important that consent that is given is really informed, and I think we’ve got a lot of work to do there to ensure that we have that, but it is absolutely crucial.
I also think sometimes that the technology moves fast, but perpetrators of abuse move fast as well, and for all the safeguards we put in, we have to be aware that perpetrators are working hard to circumvent them, so we need to be really flexible in our thinking. And I think the priority for me is keeping that human element. Humans understand humans and can hopefully foresee some of these issues and ways to combat them, but also put humanity at the heart of our response to individuals, who are humans themselves, to state the obvious, but who don’t want to be supported entirely by technology; they want access to humans and that human understanding.
David Wright: Sophie thank you very much. Perhaps it is a point as well here just to talk that the term victims being used and I know we’ve had this conversation because I think there’s been, there’s often has been criticism that we shouldn’t be using this in terms of terminology, we shouldn’t be using the word victim and that shouldn’t be the case. That should be really survivor and I think I’ll perhaps if I may put words in your mouth given the conversations that we’ve had is that no, particularly from a revenge porn helpline perspective, no we very much do support victims. Our job is to make them a survivor. Now clearly anybody’s entitled to have a reference however they see fit but certainly I think we’re making the point here is that whilst our job is to make victims into survivors of particular tragic circumstances we’re not always successful which just goes to highlight the, in large, in many cases the catastrophic impact that this has, that this abuse has on individuals lives. I don’t know Sophie is there anything you want to add there? No I think you’re right David, we
Sophie Mortimer: No, I think you’re right, David. We tend to take quite a neutral position when speaking to people, because it’s not our place as a helpline to identify somebody as a victim, a survivor, or anything else. So in practice, we reflect back to people what they say, but I completely agree that the majority of people coming to us would very much identify themselves as victims, because we are usually there for them quite early in that journey. And it is absolutely our aim to make them a survivor, in the hope that they can leave all of those labels behind and put this totally behind them.
David Wright: Thank you, Sophie. Nighat?
Nighat Dad: No, I think this is very interesting, because on our helpline, when we address folks who reach out to us, we are also very careful about what we call them, and we sort of leave it to them, what they want to call themselves. Many times when we call them survivors, and this is like our priority, like we say, we call them survivors and not victims, because they are reaching out and they’re fighting the system, they’re like, but I haven’t received any remedy, so I’m still a victim. So this is a very interesting conversation, and I think it should be entirely up to the person who is facing all of this to call themselves whatever they want to call themselves, either victims or survivors. Our priority is to call them survivors, but many times they were like, I’m not that resilient, don’t call me survivor, I don’t have that much energy left to fight back against the platforms or the legal system that I am dealing with.
David Wright: Yeah, Sophie, I don’t know if you want to react to that. And here I’m thinking, too, about how, often, when we’re approached by the media, they want to speak to somebody who we’ve supported, being careful not to add a name, which we have a policy of specifically not doing because of the acute vulnerabilities that the individuals have. Sophie?
Sophie Mortimer: I completely agree with Nighat. It’s not our place to apply that label, and certainly the majority of people who come to us would describe themselves as a victim. In fact, I’m not sure I can recall anyone who self-identified, without any prompting, as a survivor, because that is not how people are feeling in that moment and in that space. The harm feels so out of people’s control, because what has happened is on platforms. We all know that images now can move quite fast, and the fear, that loss of control, is the overwhelming feeling that people have when they come to services like ours. That doesn’t make anyone feel like a survivor, unfortunately, at that time. That’s why we use neutral language in that first instance and reflect back what somebody says to us, because that’s how they’re feeling. I hope that we provide that reassurance, so that when they hang up the phone, they will be feeling better than they did when they picked it up.
David Wright: Thank you, Sophie, and also for all that hard work that goes on in the background as well, the extent of which I know acutely. Okay, finally, I’m going to turn to Boris. I say finally because, after Boris has given us his contribution, this is when we open the floor for your questions, either in reaction to anything that you’ve particularly heard, or indeed any other aspects that we perhaps haven’t covered, both within the room and also online as well. Boris, if I just turn to you. As we’ve said, Boris is Head of Partnerships and Engagement at SWGFL. Given your extensive work in online safety, particularly at SWGFL, how do you see AI evolving as a tool to support NCII detection and intervention? What ethical frameworks do you believe are necessary to avoid potential harm to users while ensuring victims, or survivors, or whichever terminology we deem fit, are supported? Boris.
Boris Radanovic: Thank you for that very much. I just want to say I’m really honored and proud to sit amongst heroes, definitely, in this space, and thank you so much for the invitation. Thinking about AI, a quote came to my mind, and please agree or disagree with me: specifically talking about AI, I think we know little about everything and a lot about nothing. While we fully understand the complexity of the space, whether that’s the technology behind AI or the stakeholders implementing AI, I don’t think we fully understand nor utilize the true power, or the possible true power, of AI. There is much more we could be doing. I know that in a two-minute contribution trying to unpack it might be a bit difficult, but I do hope that these conversations and this session reach stakeholders from policy and government, but as well stakeholders from the industry sector. I loved the notion of stress testing and using hackers and all of that, but I would also advocate, as we just had a conversation about users and victims and children and a lot of people that we maybe don’t fully grasp, for them to be those first movers to test out or stress test those AI models, so we can see maybe a different way of thinking. Talking about that, I think we need to go back to foundations, and the foundation is that the current models’ data sets may or may not, and in some cases do, contain child sexual abuse material, non-consensual intimate imagery, and much other illegal or harmful material that we probably don’t know about. So, if anything, and I hope a lot of stakeholders are listening, let’s first clean out the fundamentals of the tools that we are supposed to be using. And number one, yes, you can use StopNCII.org hashes so that we can help you clear out already known instances of non-consensual intimate image abuse, but we must go further. And as we spoke, and as I listened to really admirable contributions from every speaker here, I hope somebody’s listening to me and in a year’s time will prove me right, that we are missing a global power force in AI development to focus on safeguarding, to focus on the guardrails, to be the solution for all of these companies that are having the same issues and problems. If there’s anybody who’s willing to work on that, SWGFL is here and definitely willing to support. But let me come back to the question about detection and intervention. I think those are an important two pieces of a much, much larger picture. Yes, we can talk about detection of behaviors, from a perpetrator point of view, but as well from a user’s point of view, behaviors that might put them at risk without their knowing it. And we need to utilize AI tools en masse to help us mitigate some of those issues. But as well, we need to talk about how we then engage with those perpetrators after we have detected it. And how do we guide those people to the right course of action? Or what are the consequences of their repeated offenses, which we know are sometimes happening on platforms, or of people or individuals taking part in something called a collector’s culture, where they intentionally collect hundreds and thousands of images of various other individuals? And we know they exist on many platforms. And the question is, okay, now that we use AI to detect this, what are we going to do next? And how are we going to act on that when we’re talking about intervention?
I think as well that Sophie and the Revenge Porn Helpline are already using a rather innovative way, I would say, of utilizing AI tools to help us mitigate the number of reports, using a chatbot function that allows us to collect those reports,
I think as well that Sophie and the Revenge Porn Helpline are already using a rather innovative way, I would say, of utilizing AI tools to help mitigate the number of reports, using a chatbot function that allows us to collect those reports, communicate with, and support a much larger number of people than we would be able to with human support alone. So again, when we talk about intervention, how about user-specific, mental-health-based and legal support that those users can access when or if they encounter that harm online? But as well, I think the question was about frameworks, and as I said in prior introductions, I think we are missing a lot: government involvement, frameworks, structures on use, and the research that stands behind it. And ethical frameworks need to be user-focused and user-centric, victim-informed or survivor-informed, most definitely, but then balance the threat of having access to the most sensitive data that exists — the data of your own or others’ abuse, how it unfolds, to whom, and where — which is at the same time an extremely sensitive data set that we might learn from, research, and maybe use to mitigate some of those risks in the future. So I’m not trying to say this is an easy thing to do, but I am saying that we should start combating it now, before we end up in a much more difficult space to untangle. So if I’m looking at it, what do we need to do? I think from a stakeholder company’s perspective, we need more dedication, and I support Meta, and I love our friends from all over the world involved with StopNCII.org, but we are at 12 of the biggest platforms in the world engaged with us. We need hundreds, we need thousands of platforms dedicating to this and advocating for a solution in this space, and then bringing it forward, and definitely more investment in NGOs and researchers across the world battling in this space, because we are at the forefront, and we are non-governmental, small and agile organizations. We are meant to be at the forefront, but as we know, with every arrow there is a long, long shaft behind it that needs to be supporting us and pushing us forward. And I love the quote and the comment from Karuna about transparency. Absolutely, we need more transparency. And as well, please agree or disagree with me, but it does seem to me that the first movers, the first companies that we see in the AI space, are more interested in safeguarding their intellectual property and their finances than in protecting and safeguarding their users. And I think that’s a big question as AI becomes part of every part of our daily life: what do we value more? I think many of us sitting here and many of us listening would advocate for privacy and protections, but also for safeguarding our users first and foremost. And then we can build upon those tools. And maybe, in the end — I was trying to find a picture that helps me better understand the extremely rapid rise and development of AI, and I remembered, I don’t know if you saw them, the first films and photographs of the Wright brothers and their planes when we started inventing them. After a couple of meters, the plane crashed. Then they spent months or years developing, then a couple of dozen meters, then hundreds of meters. So we evolved rather slowly and then more rapidly — the plane, something that brought us all here to this wonderful city of Riyadh.
I think with AI, we are moving at a light-speed pace of development, but we have no idea who’s flying and we have no idea how we’re going to land. So I fully advocate that we need to fix the foundations and invest more in cleaning the data sets, invest in the NGOs around the world battling these issues and trying to find solutions, and help us all understand AI better and use it better, so that hopefully we can land safely and find a better and more powerful use for AI for the benefit of us all. I think that would be it. And thank you so much.
David Wright: Thank you, Boris. That’s a point to finish on, forgive me: if anyone does have any ideas about how AI is going to land, then we would very much like to hear them. Okay, so now we’re going to turn it over to you, in terms of any particular questions that anyone has. Niels, have we got any questions? Not yet online, but I might introduce a personal question then, in that case; I have the mic anyway.
Audience: First of all, thank you all very much for these very, very valuable contributions. It was a very interesting panel. For those who don’t know me, I’m Niels Van Pamel from Child Focus, which is the Belgian Safer Internet Centre. I definitely agree with almost everything that has been said here. Also with Sophie’s comment that, with deepfakes right now, we are maybe focusing too much on showing that something is fake, but that doesn’t really matter for a victim — for example, somebody who’s a victim of deepnuding, with fake naked pictures that everybody believes to be real anyway, right? We did a study last year on deepnuding, looking first of all at what the market looks like, what is happening with young people in Belgium right now, and how this is exploding in our faces. And we’ve seen, first of all, that the long-term traumatic impact for a victim is exactly the same as for victims of real NCII, right? So we need to debunk some myths. I also wanted to add to what I think Boris said, that we have to take into account how fast things are changing and moving right now, and that we risk jumping to conclusions. To give an example: in this study — it’s a study from 2023 — we noticed that 99% of all the victims of deepnuding were women and girls. But this year, 50% of the cases we opened at Child Focus involved men who were victims. So what we concluded is that in the early days, in 2023, most of the victims were girls because the data sets that were used only worked on girls and women. But right now, perpetrators who want to sextort victims are also using AI much more on their own behalf to guide them, and this technology apparently now also works on boys — how do you say this — to make deepnudes of boys. So if we don’t do more research into how these technologies are finding their way to new vulnerable groups, we might overlook them. So that was a comment on that: we need longitudinal follow-up and academic research. That was my comment here.
Nighat Dad: Can I just respond to the point about men becoming victims of sextortion? At our helpline, when we started, we designed it keeping in mind that more young women and girls were becoming victims and survivors of this kind of crime. But we ended up getting 50% of complaints from men. And from 2016 up till now, we have never said no to them, even though we started the helpline only for women. When men reach out to you from a context and culture where shame is so strongly associated with anyone, men or women, what we noticed is that young men had nowhere else to turn. Of course, the cyber harassment helpline was the unique one, but there were other helplines offering women psychological support, and none for men. So we ended up dealing with their complaints. Another thing we noticed was that young boys and men were hesitant to go to law enforcement as well, again because of the culture of shame associated with it. But also, and I think this is more related to privacy, they were really scared, like women, to give their evidence to law enforcement: how will they deal with it, how will they protect my data when I give it to them as evidence and they work on my case? So what they wanted, basically, was to report to the platform first and to the helpline. Their first line of reporting was always the helpline and the platform instead of law enforcement. So I think it touches on the fact that this goes beyond any gender or sex. It impacts everyone, and especially in conservative cultures, women still find some space to talk to each other, but young boys just suffer in silence.
David Wright: Yes, and maybe to add to this comment, when you said that boys are scared to go to law enforcement with evidence, I guess that’s where on-device hashing comes in.
Boris Radanovic: Wonderful, thank you so much. Niels, I appreciate the comment, and if I may, I think it proves the point for both of us that the modalities of harm are changing so rapidly that even we, whose job it is to follow them, sometimes have difficulty. And I love this way of describing deepfakes especially: the image may not be real, but the harm is. And we need to understand that in a fast-evolving AI visual space, we now have more and more AI tools being developed that can, based on one prompt, generate a couple of minutes of video. That use case will unfortunately extend and be far more wide-reaching: where we once had fake or digitally altered imagery, we now have videos that may or may not seem real, but the harm, as we already know, is real. So we don’t need another reason, we don’t need more experiences, to know that the harm perpetrated against victims and users will be real. So thank you so much for that comment.
David Wright: Okay, just to carry on that theme, and before I open it up again for a question here: Sophie, is there any response to that, particularly knowing the increasing call volume as well as, as Nighat said, the changes in terms of gender? Yes.
Sophie Mortimer: It was interesting to hear what Nighat was saying. We certainly have always had a substantial proportion of cases of male victims affected by sextortion, though that proportion rose significantly around 2020 and hasn’t really fallen away since; it now makes up between a quarter and a third of our caseload. And I think it’s interesting that they’re talking about the creation of synthetic content to use in sextortion, but of course AI is also being used to generate those conversations at scale, and the presentation of the person that the victim or survivor thinks they are talking to can also be AI-generated. That, of course, just ramps up the scale of these forms of abuse and, frankly, crimes. The other thing that struck me was that I looked at some cases earlier in the year where the images would not fit a narrow definition of intimate content, but they were very, very harmful to the people depicted in their own communities. I know it’s always the staple example we refer to, of a woman not wearing a headscarf, but that was the reality people were experiencing, and it can cause enormous harm. So I think we need to be aware that there are broader definitions of intimacy globally, and we need to be very nuanced in our responses, but also aware of how these technologies can be used to cause other forms of harm as well. So there are huge challenges here, not least the tenfold increase in case volume in the last four years. And there’s that. Absolutely, David: case numbers continue to rise year on year, and certainly in the last four or five years they have risen exponentially. Okay, thank you very much.
Karuna Nain: David, I don’t know if you can see me, but I just wanted to follow up on the gender discussion and check in with both Sophie and Nighat, based on what they’re seeing on their helplines, because the initial research that I’ve seen also indicates that it’s usually more financially motivated when it’s related to men and boys, whereas with women there are other motivations at play. Is this consistent with what you are seeing on your helplines, or what are you hearing from people who are calling in?
Nighat Dad: Yeah, Karuna, I’ll quickly respond to that. I think it’s changing: for men who are public figures — politically active people, human rights defenders, journalists — their intimate imagery or videos are actually one way of intimidating them into silence, basically. So it’s also shifting from financial motives to other motives on the part of the bad actors.
Speaker 1: Just a short point on the role that companies — and this is not just Meta, but companies like Meta and other social media platforms — can play in disseminating education as well as resources, because I think that’s really important too. I know a lot of people mentioned sextortion; we recently ran a sextortion PSA in a number of countries, where we worked with experts to develop the exact messaging that is really important for young people, and for young women and young men here too. And I think that’s something where more of us can collaborate, because everybody is doing things in isolation, but there is real room for collaboration in those spaces.
David Wright: Thank you very much. Okay, I’m going to walk over here because I think we’ve got a question. If we can just ask you to introduce yourself as well, that would be great.
Audience: Thank you. My name is Adnan. I’m a Senior Legal Advisor at SEED Foundation, a local NGO in the Kurdistan region of Iraq. Before my question, I want to thank all the panelists for their valuable insights and thoughts. I have one question regarding accountability and how we can promote it — holding these perpetrators accountable on these platforms, for example. They are not committing one crime and then leaving; they will be posting or using the content again later, maybe against someone else. So is there anything those companies do with regard to holding them accountable? And the second question: I know that there is always a line when we talk about collaboration with courts and judicial authorities — handing over evidence and materials that have been removed — because that would help those women access justice. A lot of times when women seek assistance, some of them want the content stopped or removed, but others want justice and want the perpetrators held accountable. Thank you.
David Wright: That’s a great question. Thank you very much. Panel?
Audience: We’ve got an online question. Carissa is asking: do existing legal frameworks, such as ICCPR Article 17 on privacy, hold any weight in preventing NCII, both real and AI-generated? Okay. And one more question from the floor. Hi, I’m a researcher based in Germany, and I have a question for the representative from Meta. I actually report hate speech and sexually abusive content on a weekly basis, and I do it not just for work but also on a personal level. The problem is that there have been three possible outcomes. Only one of my requests was accepted by Meta. The second situation is that there was no response at all, and there was no way for me to challenge the decision or send any follow-up request. And the third is that my request was not accepted. So in the case of wanting to follow up on my own request or challenge Meta’s decision, what would you suggest I do? I also want to ask what Meta’s take is on punishing the perpetrators behind those images, because as far as I know, the highest punishment so far is to deactivate or delete the account. And my question to the woman in charge of a helpline — I’m sorry, I don’t remember your name: in your experience, were there any women or gender-diverse people who complained about sexual abuse? I’m also doing research on online gender-based violence, and in my own research there are a lot of trans teenagers and gender-diverse people who face these issues. And also, how would you reach out to people who don’t really understand the issues and who don’t really have any hope of addressing them? Thank you.
David Wright: Okay. I think there’s a lot of commonality between the two questions, and perhaps one is specifically for Meta. And given the Oversight Board as well, there’s a relevant point there. So, on the question particularly to do with prosecution — the first question — does anyone wish to respond?
Speaker 1: I can respond from Meta’s perspective. We work with law enforcement agencies across the globe, and when we get valid legal requests, we respond with the data that is required to prosecute, which is the job of the prosecutors. We also disclose, in the transparency reports that we publish, the number of data requests we’ve received from authorities and how many of those we’ve complied with. We also have teams at Meta who work directly with law enforcement authorities to ensure that, for the really high-severity crimes — and I’m not just talking about NCII here — there is a point of contact in case they need one. What I will say is that we have less visibility into the actual prosecutions. So, for example, on child sexual abuse material, we are required as a US organization to report to NCMEC. NCMEC then works with law enforcement authorities to make sure that really sensitive data is made available to them in a very privacy-protective manner. We don’t really have visibility into how that data is used to prosecute the perpetrators, and I think that’s an important link in the chain that is missing. One of the things we talk about is that it’s a whole chain, and somebody asked what we do in addition to deplatforming. All stakeholders have a role: we can remove the content, we can deplatform, and we can work with law enforcement agencies to respond to valid requests, but there needs to be a lot more transparency around prosecutions. We know that in a lot of countries many of these crimes may be reported but not necessarily prosecuted, for a number of reasons, including lack of capacity, lack of understanding, lack of resources, or just the inability to prosecute.
Nighat Dad: Responding to the researcher — not only as a helpline but also sitting on the Oversight Board, we actually investigated a bundle of deepfake image cases, one from India and one from the US, and we recommended many things to Meta around the gaps that we saw. One thing that was clear to us was that Meta’s platforms need to create pathways for users to easily report this type of content, and they must act quickly as well. It shouldn’t matter whether the victim is a celebrity or a regular person. What we noticed was that the cases we picked up involved celebrities — public persons — and it was only when their content went viral that we took up the case. But what exactly the mechanism at Meta is for giving weight to every user’s report is a matter of concern. I would also say that, as a helpline, we do a lot of awareness-raising in different institutions, schools and colleges, and we try to work with the government, although it’s not their priority — just to let people know that this kind of crime exists, but that there are also remedies and people they can reach out to. And you raised a point about repeat offenders. That’s also a concern for us: repeat offenders find a way to come back to the platform and then do the same thing. And that is really a question for the platforms: what do they do with repeat offenders?
David Wright: Thank you, Nighat. Also on Carissa’s question — Sophie, I anticipate you may have a response to this one as well. Niels, I think the question was around whether existing legal frameworks hold any weight in preventing NCII. Sophie, I suspect you have a response to that one.
Sophie Mortimer: Thanks, David. I’ll try not to take too long, but quickly, first, on the evidence point: I think, unfortunately, even in the UK, where we’ve had legislation around the sharing of non-consensual intimate images for almost 10 years now, the collection of evidence still presents challenges and there is no consistent approach. We have 43 police forces in the UK, so consistency is always a challenge, but certainly there’s nothing really consistent around evidence. We have, as a helpline, provided statements to the police; we can establish what we have done — facts, dates, the links that we have removed. I also think there’s some work to do around the categorisation of intimate images, because sometimes this content — and it’s very off-putting for victims — is shared amongst multiple police officers, with the prosecuting services, and in courts. That’s a massive barrier, I think, to people coming forward. I think we could do some work around supplying information and categories that would be accepted by courts, so that all those individuals don’t have to view the content, and that would be quite a supportive measure to get people coming forward and to support prosecutions. But in terms of legal frameworks, as I say, it’s nearly 10 years since the UK first got legislation in this area. In fairness, it wasn’t great legislation to start with, but the government responded fairly quickly, in that they recognised within six or so years that the legislation wasn’t fit for purpose, and a really thorough review was done. We got new legislation at the beginning of this year, which is much more comprehensive and focuses on the consent of the person depicted in an image rather than the intentions or motivations of perpetrators. That’s quite a powerful step forward, because this intention to cause distress is still quite current in other forms of legislation around the world. But there is definitely more to do in terms of the legal status of this content. We are campaigning in the UK for non-consensual intimate images, particularly after conviction, to be classified as illegal content and treated in the same way as child sexual abuse material, to give us the same powers to remove it. We are already good at removal, and we have great relationships with industry, but there are cases where we can’t act, because there are multiple non-compliant sites whose business model is based on the sharing of this sort of content; they don’t comply with us, they don’t comply with other regulators, and they are hosted in countries beyond the reach of regulation. So I think it’s really important that we find other ways of leveraging the law to make this content much less visible, to give people the security that they can actually move on with their lives and not be in fear that their images are two or three clicks away from being viewed by anyone.
David Wright: Thank you. Thank you, Sophie. I want to give a shout-out as well to the draft UN Cybercrime Convention that was published in August, and particularly, I think, to UNODC’s global strategy, in terms of the inclusion of NCII. Much to our surprise, NCII was included within the new, or at least the draft, Cybercrime Convention, which we anticipate will be ratified next year, meaning that all states should have laws to do with NCII. So perhaps in response to that question: do we have some today? There are some. With what weight? We’ve heard from Sophie that they carry some, but they can prove quite porous. But there is optimism around a push, a direction across the world, in terms of laws that will help in this regard. I’m conscious we’ve only got a couple of minutes left. You wanted to make a quick comment, Boris? I’ll try.
Boris Radanovic: Thank you for the questions as well. Far be it from me to say, coming from an NGO and working in this space from an NGO perspective, but all three questions come back to the same thing in my mind. We talked about accountability, legal frameworks, and then reporting. It comes back to the middle letter of this conference: the G in Internet Governance Forum. I don’t think the scary question is what are you going to do, Meta, TikTok, Reddit. I think the scary question is what are we going to do, and how are we going to define accountability for perpetrators on those platforms, develop the legal frameworks and the governance, and then make sure that the platforms follow them and are held accountable. I think that’s a difficult question for us to answer, and the legal frameworks, I would say, absolutely need to be more inspired and more forward-looking around the world, so that we as a society, across all cultures and different nation-states, define how we approach accountability for abuse in digital spaces and how we hold those responsible to account. It’s a far more diverse question that we need to discuss as a society than any one stakeholder can answer, but I’m here for it, and if anybody has a good idea or an inspiring legal framework from around the world, please do share it.
David Wright: Which will probably have to be the closing remark, given we’ve run out of time and the transcription has stopped. Hopefully we’ve given you some form of response here. As we’ve always said, this is a world-leading panel in terms of insight, so I pay tribute to all of your work, and I would invite everyone to show our recognition both for the extraordinary work that these people do and for the panel session itself. Thank you very much.
Nighat Dad
Speech speed
147 words per minute
Speech length
2434 words
Speech time
987 seconds
AI models need to account for cultural nuances and non-Western contexts
Explanation
Nighat Dad emphasizes the importance of AI systems being adapted to understand cultural and linguistic nuances, especially in non-Western contexts. She points out that current AI models are often trained on English and Western data, which can lead to biases and inaccuracies when applied globally.
Evidence
Nighat mentions her experience on the UN Secretary General’s AI high-level advisory body, where they brought global majority perspectives to AI discussions.
Major Discussion Point
Challenges and Ethical Considerations in Using AI to Combat NCII
Helplines play a crucial role in providing support and resources
Explanation
Nighat Dad highlights the importance of helplines in addressing online harms, particularly for young women and girls in countries like Pakistan. She explains that helplines provide a clear picture to platforms about the contextual nuances of online abuse and offer support to victims.
Evidence
She mentions the Digital Rights Foundation’s Cyber Harassment Helpline started in 2016 to address online harms faced by young women and girls in Pakistan.
Major Discussion Point
Supporting Victims and Survivors of NCII
Platforms need better reporting mechanisms for users
Explanation
Nighat Dad emphasizes the need for social media platforms to create easier pathways for users to report content like deepfakes. She stresses the importance of quick action on reports, regardless of whether the victim is a celebrity or a regular person.
Evidence
She references the Oversight Board’s investigation of deepfake cases from India and the US, which led to recommendations for Meta to improve its reporting mechanisms.
Major Discussion Point
Supporting Victims and Survivors of NCII
Agreed with
Karuna Nain
Boris Radanovic
Agreed on
Need for transparency in AI use by platforms
Sophie Mortimer
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
Victim privacy and consent must be prioritized when using AI tools
Explanation
Sophie Mortimer emphasizes the importance of approaching AI use in victim support with caution. She stresses that victims’ trust in technology may be degraded due to their experiences, and their privacy and consent must be prioritized in any AI-based support systems.
Evidence
She mentions a previous AI support tool project at South West Grid for Learning that was ultimately not implemented due to concerns about adequately safeguarding users.
Major Discussion Point
Challenges and Ethical Considerations in Using AI to Combat NCII
Agreed with
Nighat Dad
David Wright
Agreed on
Importance of victim-centric approaches
Differed with
Karuna Nain
Differed on
Use of AI in victim support
Victim-centric language and approaches are needed
Explanation
Sophie Mortimer discusses the importance of using neutral language when interacting with those affected by NCII. She explains that many individuals identify as victims rather than survivors when first seeking help, and it’s crucial to reflect their own language back to them.
Evidence
She shares that in her experience, most people contacting their helpline describe themselves as victims, not survivors, due to the overwhelming feeling of loss of control.
Major Discussion Point
Supporting Victims and Survivors of NCII
Agreed with
Nighat Dad
David Wright
Agreed on
Importance of victim-centric approaches
Existing laws often fall short in addressing NCII
Explanation
Sophie Mortimer discusses the limitations of current legal frameworks in addressing NCII. She highlights the need for more comprehensive legislation that focuses on the consent of the person depicted in an image rather than the intentions of perpetrators.
Evidence
She mentions the UK’s experience with NCII legislation, which was revised after about six years due to being unfit for purpose. The new legislation implemented in early 2024 is more comprehensive.
Major Discussion Point
Legal Frameworks and Accountability
Case volumes for helplines are rising exponentially
Explanation
Sophie Mortimer notes that the number of cases reported to helplines has increased dramatically in recent years. This rise in case volume highlights the growing prevalence of NCII and the increasing need for support services.
Evidence
She mentions a tenfold increase in case volume over the last four years.
Major Discussion Point
Emerging Trends and Challenges
Speaker 1
Speech speed
154 words per minute
Speech length
2049 words
Speech time
793 seconds
AI can help with scale and speed of content moderation, but human oversight is still needed
Explanation
The speaker emphasizes that while AI can significantly improve the scale and speed of content moderation, human oversight remains crucial. They stress the importance of combining automated technology with human review to ensure appropriate actions are taken (an illustrative sketch of this combination follows this entry).
Evidence
The speaker mentions Meta’s use of proactive technology to catch violating content before it’s reported, while still maintaining human review processes.
Major Discussion Point
Challenges and Ethical Considerations in Using AI to Combat NCII
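As a rough illustration of the combination described in this entry — automated detection at scale with humans retained for final decisions — here is a minimal sketch. The classifier, the threshold values, and the review queue are hypothetical assumptions for illustration only and do not describe Meta’s or any platform’s actual moderation pipeline.

```python
# Hypothetical sketch of pairing automated detection with human oversight.
# The classifier, thresholds, and review queue are illustrative assumptions,
# not a description of any platform's real moderation system.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModerationPipeline:
    classify: Callable[[bytes], float]           # returns a violation score in [0, 1]
    auto_action_threshold: float = 0.98          # act automatically only when very confident
    review_threshold: float = 0.70               # otherwise route likely violations to humans
    human_review_queue: List[bytes] = field(default_factory=list)

    def handle_upload(self, content: bytes) -> str:
        score = self.classify(content)
        if score >= self.auto_action_threshold:
            return "blocked_automatically"
        if score >= self.review_threshold:
            self.human_review_queue.append(content)  # a person makes the final call
            return "queued_for_human_review"
        return "allowed"

# Example with a stand-in classifier that always returns a borderline score.
pipeline = ModerationPipeline(classify=lambda content: 0.85)
print(pipeline.handle_upload(b"example-image-bytes"))  # -> queued_for_human_review
```

The design choice the sketch highlights is that fully automated action is reserved for high-confidence detections, while borderline cases go to a reviewer, which matches the speaker’s point that scale comes from automation but accountability for final decisions stays with people.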
Education and awareness efforts are important
Explanation
The speaker highlights the importance of educating users about online safety and the risks associated with NCII. They emphasize the role that social media platforms can play in disseminating educational content and resources.
Evidence
The speaker mentions Meta’s recent sextortion PSA campaign in several countries, developed in collaboration with experts.
Major Discussion Point
Supporting Victims and Survivors of NCII
Platforms need to improve cooperation with law enforcement
Explanation
The speaker discusses the importance of cooperation between social media platforms and law enforcement agencies in addressing NCII. They explain that platforms respond to valid legal requests with necessary data for prosecution, but note that there’s often a lack of visibility on the outcomes of these cases.
Evidence
The speaker mentions Meta’s transparency reports that disclose the number of data requests received from authorities and how many were complied with.
Major Discussion Point
Legal Frameworks and Accountability
Karuna Nain
Speech speed
186 words per minute
Speech length
1547 words
Speech time
497 seconds
Transparency is needed on how AI tools are being used by platforms
Explanation
Karuna Nain emphasizes the need for more transparency from tech companies about how they are leveraging AI in addressing NCII. She points out that while companies have been open about using AI for issues like child sexual abuse material, there’s less information about its use in combating NCII.
Evidence
She mentions Meta’s use of AI in closed secret groups to proactively identify potentially non-consensual content for review.
Major Discussion Point
Challenges and Ethical Considerations in Using AI to Combat NCII
Agreed with
Nighat Dad
Boris Radanovic
Agreed on
Need for transparency in AI use by platforms
Differed with
Sophie Mortimer
Differed on
Use of AI in victim support
Boris Radanovic
Speech speed
180 words per minute
Speech length
2078 words
Speech time
689 seconds
Current AI models may contain problematic training data that needs to be addressed
Explanation
Boris Radanovic raises concerns about the training data used in current AI models, which may include illegal or harmful material such as child sexual abuse content or non-consensual intimate imagery. He emphasizes the need to clean up these fundamental aspects of AI tools.
Evidence
He suggests using tools like StopNCII.org hashes to clear out known instances of non-consensual intimate image abuse from AI training data.
Major Discussion Point
Challenges and Ethical Considerations in Using AI to Combat NCII
Agreed with
Nighat Dad
Karuna Nain
Agreed on
Need for transparency in AI use by platforms
Perpetrators are using AI tools in sophisticated ways
Explanation
Boris Radanovic points out that perpetrators are increasingly using AI tools in sophisticated ways to carry out abuse. He emphasizes the need for AI systems to detect and intervene in these behaviors, while also considering how to engage with perpetrators after detection.
Major Discussion Point
Emerging Trends and Challenges
Internet governance needs to evolve to better address online harms
Explanation
Boris Radanovic argues that internet governance needs to evolve to better address online harms like NCII. He emphasizes the need for society as a whole to define how to approach accountability for digital abuse and how to hold platforms accountable.
Major Discussion Point
Legal Frameworks and Accountability
David Wright
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
On-device hashing tools like StopNCII.org empower victims
Explanation
David Wright highlights the importance of tools like StopNCII.org in empowering victims of NCII. These tools allow users to create digital fingerprints of their images without uploading them, providing a way to prevent the spread of non-consensual content (a minimal sketch of this hashing flow follows this entry).
Evidence
He describes how StopNCII.org works, creating hashes of images on the user’s device and sharing these with participating platforms to prevent upload of matching content.
Major Discussion Point
Supporting Victims and Survivors of NCII
Agreed with
Sophie Mortimer
Nighat Dad
Agreed on
Importance of victim-centric approaches
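This entry describes the mechanism behind tools like StopNCII.org: a fingerprint (hash) is computed on the victim’s own device, and only that hash is shared with participating platforms. Below is a minimal sketch of that data flow, for illustration only: the function names and in-memory hash list are assumptions, and the real service reportedly uses perceptual hashing (such as PDQ) so that re-encoded or slightly altered copies still match, whereas this sketch substitutes a plain SHA-256 digest for simplicity.

```python
# Minimal sketch of the on-device hashing flow behind services like StopNCII.org.
# Assumption: the real service reportedly uses perceptual hashing (e.g. PDQ);
# a plain SHA-256 digest is used here purely to show the data flow. The key
# property illustrated is that the image never leaves the device -- only its
# fingerprint is shared.

import hashlib
from pathlib import Path

def fingerprint_image(path: Path) -> str:
    """Compute a fingerprint of an image locally; the raw image is never transmitted."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical hash list shared with participating platforms.
shared_hash_list: set[str] = set()

def submit_fingerprint(path: Path) -> None:
    """Victim-side step: only the locally computed hash is added to the shared list."""
    shared_hash_list.add(fingerprint_image(path))

def should_block_upload(candidate: Path) -> bool:
    """Platform-side step: block an upload whose fingerprint matches the shared list."""
    return fingerprint_image(candidate) in shared_hash_list
```

In principle, the same hash-list matching could also be used to filter already known NCII out of training corpora, which relates to Boris Radanovic’s point about cleaning AI data sets.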
Global frameworks like the UN Cybercrime Convention are promising
Explanation
David Wright mentions the draft UN Cybercrime Convention as a promising development in addressing NCII globally. He notes that the convention includes provisions requiring all states to have laws addressing NCII.
Evidence
He references the draft UN Cybercrime Convention published in August and the UNODC’s global strategy including NCII.
Major Discussion Point
Legal Frameworks and Accountability
Audience
Speech speed
155 words per minute
Speech length
900 words
Speech time
346 seconds
Sextortion cases are increasing, including against men/boys
Explanation
An audience member notes that sextortion cases are increasing, and there’s a growing trend of men and boys becoming victims. This highlights the evolving nature of online sexual abuse and the need for support services to adapt to these changes.
Evidence
The audience member cites a study from 2023 showing that 99% of victims of deepnude imagery were women and girls, but in the current year, 50% of their cases involve male victims.
Major Discussion Point
Emerging Trends and Challenges
Agreements
Agreement Points
Need for transparency in AI use by platforms
Nighat Dad
Karuna Nain
Boris Radanovic
Platforms need better reporting mechanisms for users
Transparency is needed on how AI tools are being used by platforms
Current AI models may contain problematic training data that needs to be addressed
The speakers agree that there is a need for greater transparency from tech companies about how they are using AI to combat NCII, including better reporting mechanisms and addressing issues with training data.
Importance of victim-centric approaches
Sophie Mortimer
Nighat Dad
David Wright
Victim privacy and consent must be prioritized when using AI tools
Victim-centric language and approaches are needed
On-device hashing tools like StopNCII.org empower victims
The speakers emphasize the importance of prioritizing victim privacy, consent, and empowerment when developing and implementing AI tools to combat NCII.
Similar Viewpoints
Both speakers recognize the potential of AI in addressing NCII but emphasize the continued need for human involvement, whether in content moderation or in developing more comprehensive legal frameworks.
Sophie Mortimer
Speaker 1
AI can help with scale and speed of content moderation, but human oversight is still needed
Existing laws often fall short in addressing NCII
Unexpected Consensus
Increasing prevalence of male victims in NCII cases
Nighat Dad
Sophie Mortimer
Audience
Helplines play a crucial role in providing support and resources
Case volumes for helplines are rising exponentially
Sextortion cases are increasing, including against men/boys
There was an unexpected consensus on the increasing prevalence of male victims in NCII cases, challenging the traditional narrative that primarily focuses on women and girls as victims. This highlights the need for support services to adapt to these changing demographics.
Overall Assessment
Summary
The main areas of agreement include the need for greater transparency in AI use by platforms, the importance of victim-centric approaches, the necessity of balancing AI capabilities with human oversight, and the recognition of evolving victim demographics in NCII cases.
Consensus level
There is a moderate to high level of consensus among the speakers on these key issues. This consensus suggests a shared understanding of the complex challenges in combating NCII and the need for multifaceted approaches involving technology, policy, and support services. The implications of this consensus point towards a potential for collaborative efforts in developing more effective strategies to address NCII, while also highlighting the need for continued research and adaptation to emerging trends.
Differences
Different Viewpoints
Use of AI in victim support
Sophie Mortimer
Karuna Nain
Victim privacy and consent must be prioritized when using AI tools
Transparency is needed on how AI tools are being used by platforms
While Sophie Mortimer emphasizes caution in using AI for victim support due to privacy concerns, Karuna Nain advocates for more transparency from tech companies about how they are using AI to combat NCII.
Unexpected Differences
Gender distribution of NCII victims
Nighat Dad
Audience
Helplines play a crucial role in providing support and resources
Sextortion cases are increasing, including against men/boys
While Nighat Dad initially focused on young women and girls as primary victims, the audience member’s comment about increasing sextortion cases against men and boys revealed an unexpected shift in victim demographics. This highlights the evolving nature of NCII and the need for support services to adapt.
Overall Assessment
Summary
The main areas of disagreement centered around the readiness and appropriate use of AI in combating NCII, the balance between technological solutions and human oversight, and the evolving nature of NCII victims and perpetrators.
Difference level
The level of disagreement among speakers was moderate. While there were differing perspectives on the implementation of AI and the approach to victim support, there was a general consensus on the importance of addressing NCII and the need for improved legal frameworks and platform accountability. These differences highlight the complexity of the issue and the need for a multifaceted approach involving various stakeholders.
Partial Agreements
Partial Agreements
Both speakers agree on the potential of AI in content moderation, but disagree on the current state of AI readiness. Speaker 1 emphasizes the immediate benefits of AI with human oversight, while Boris Radanovic highlights the need to first address problematic training data in AI models.
Speaker 1
Boris Radanovic
AI can help with scale and speed of content moderation, but human oversight is still needed
Current AI models may contain problematic training data that needs to be addressed
Takeaways
Key Takeaways
AI has potential to help combat NCII, but must be implemented ethically with human oversight
Victim privacy, consent and cultural nuances must be prioritized when developing AI tools
Platforms need to improve transparency around AI use and cooperation with law enforcement
Helplines and victim support services play a crucial role but are facing rising case volumes
Legal frameworks for addressing NCII are improving but still have significant gaps
Emerging threats like AI-generated deepfakes pose new challenges
A multi-stakeholder approach involving industry, civil society and governments is needed
Resolutions and Action Items
Platforms should provide more transparency on how AI is being used to combat NCII
More research is needed on evolving trends and impacts of NCII across different demographics
Stakeholders should collaborate on developing ethical frameworks for AI use in this space
Efforts should be made to expand tools like StopNCII.org to more platforms
Unresolved Issues
How to effectively hold perpetrators accountable across jurisdictions
Balancing use of AI for detection/prevention with privacy and consent concerns
Addressing non-compliant websites that host NCII content
Improving consistency in evidence collection and categorization for prosecutions
Mitigating bias in AI models used for content moderation
Suggested Compromises
Using AI for initial detection but maintaining human review for final decisions
Allowing victims to choose how their data is used in reporting/removal processes
Balancing removal of content with preservation of evidence for potential prosecutions
Thought Provoking Comments
Over the years we have seen that online harms or violence against women, or tech-facilitated gender-based violence, now we have so many names of this, but non-consensual intimate imagery around the world has very different consequences in different jurisdictions. In many parts of the world, it kind of limits itself to the online spaces, but in some jurisdictions it turns into offline harm against especially marginalized groups like young women and girls.
speaker
Nighat Dad
reason
This comment highlights the global variability in impacts of NCII abuse, emphasizing how cultural context shapes consequences.
impact
It broadened the discussion to consider cultural and jurisdictional differences, setting the stage for a more nuanced global perspective.
We simply couldn’t be sure that the technology could safeguard people in their time of need adequately enough. I really hope this will change because I think there is huge potential here, and that we can revisit these concepts, but it’s just really imperative that we have trust in the security of such a tool and that it prioritises the safety and wellbeing of users.
speaker
Sophie Mortimer
reason
This comment introduces a critical perspective on the limitations and risks of AI in addressing NCII abuse.
impact
It shifted the conversation to consider the ethical implications and potential drawbacks of AI solutions, balancing the earlier optimism about technology.
I think just labelling something as fake can undermine the experience of individuals because there is a real loss of bodily autonomy and self-worth. It can cause really significant emotional distress. If we only focus on the falseness of an image, an AI system might overlook the broader psychological and social impacts on individuals.
speaker
Sophie Mortimer
reason
This insight challenges the assumption that identifying fake images solves the problem, highlighting the deeper psychological impacts.
impact
It deepened the discussion on the nature of harm in NCII abuse, moving beyond technical solutions to consider emotional and social consequences.
AI systems need to solve for cultural nuance and we know that current models are trained on English and other Western contexts and languages. But I’m also hopeful and optimistic that while we are having these conversations, these conversations will lead to a new generation of AI that will better understand cultural and linguistic nuance.
speaker
Nighat Dad
reason
This comment addresses a critical limitation in current AI systems while expressing optimism for future improvements.
impact
It sparked discussion on the need for more diverse and culturally sensitive AI development, emphasizing the importance of global perspectives.
I think from a stakeholder company’s perspective, we need more dedication, and I support Meta, and I love our friends from all over the world involved with StopNCII.org, but we are at 12 of the biggest platforms in the world engaged with us. We need hundreds, we need thousands of platforms dedicating to this and advocating for a solution in this space, and then bringing it forward, and definitely more investment in NGOs and researchers across the world battling in this space
speaker
Boris Radanovic
reason
This comment emphasizes the need for broader engagement and investment from platforms and stakeholders to address NCII abuse.
impact
It shifted the discussion towards the need for more comprehensive and collaborative approaches, highlighting the scale of the challenge.
Overall Assessment
These key comments shaped the discussion by broadening its scope from technical solutions to encompass cultural, ethical, and psychological dimensions of NCII abuse. They highlighted the complexity of the issue, emphasizing the need for nuanced, culturally sensitive approaches that go beyond simple technological fixes. The discussion evolved to consider the global variability of impacts, the limitations of current AI systems, the psychological depth of harm, and the need for broader stakeholder engagement. This multifaceted exploration led to a more comprehensive understanding of the challenges and potential solutions in combating NCII abuse.
Follow-up Questions
How can AI systems for NCII detection be adapted ethically to fit varying cultural and legal contexts?
speaker
David Wright
explanation
This is important to ensure AI tools are effective and appropriate across different regions and cultures.
What role can AI play in scaling global NCII protection efforts?
speaker
David Wright
explanation
Understanding AI’s potential in this area could help improve and expand protection efforts worldwide.
What ethical principles are essential to ensuring AI tools support victims without compromising user autonomy?
speaker
David Wright
explanation
This is crucial for developing AI tools that help victims while respecting their privacy and agency.
How can we ensure transparency and constant auditing of AI content moderation systems?
speaker
Nighat Dad
explanation
This is important for understanding how well these systems perform and identifying areas for improvement.
How can platforms create easier pathways for users to report NCII content and ensure quick action regardless of the victim’s public status?
speaker
Nighat Dad
explanation
This is crucial for improving victim support and ensuring equal treatment of all users.
How can we address the issue of repeated offenders who find ways to return to platforms after being removed?
speaker
Nighat Dad
explanation
This is important for preventing ongoing abuse and improving platform safety.
How can we improve the collection and handling of evidence in NCII cases to better support prosecutions?
speaker
Sophie Mortimer
explanation
This is crucial for improving legal outcomes and supporting victims seeking justice.
How can we develop a consistent approach to categorizing intimate images for legal purposes?
speaker
Sophie Mortimer
explanation
This could help streamline legal processes and reduce barriers for victims coming forward.
How can we leverage the law to make NCII content less visible, particularly on non-compliant sites?
speaker
Sophie Mortimer
explanation
This is important for reducing the spread of NCII and helping victims move on with their lives.
How can we develop more forward-looking legal frameworks and governance structures to address digital abuse and hold perpetrators accountable?
speaker
Boris Radanovic
explanation
This is crucial for creating effective, long-term solutions to combat NCII and other forms of online abuse.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online