WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention
25 Jun 2025 10:45h - 12:15h
Session at a glance
Summary
This workshop focused on protecting children from online sexual exploitation and abuse (CSAM/CSEA), particularly in live streaming contexts, bringing together experts from civil society, academia, and the tech industry to discuss technology, policy, and prevention approaches. The discussion was structured around three main themes: integrating technology tools into trust and safety systems while protecting children’s rights, promoting cross-platform collaboration to prevent content spread, and strengthening policy frameworks to address emerging forms of abuse.
Speakers emphasized the importance of adopting a “safety by design” approach that incorporates protective technologies at both the front-end to prevent harm and back-end to detect violations. However, significant gaps remain in transparency reporting, with companies rarely providing detailed information about their moderation practices specific to live streaming. The discussion highlighted alarming trends, including that one-third of reported CSAM cases involve self-generated content, with 75% involving prepubescent children, suggesting widespread grooming and exploitation.
Cross-platform collaboration emerged as crucial since bad actors typically exploit multiple services across the tech ecosystem. The Tech Coalition’s Lantern project was presented as an example of successful signal-sharing between platforms, enabling companies to share information about violating accounts and activities while preserving privacy. Speakers stressed that technology solutions must be complemented by multi-stakeholder approaches involving human rights advocates, children themselves, parents, and law enforcement.
Policy recommendations included strengthening transparency requirements, improving age assurance mechanisms, addressing platform recidivism where banned users easily create new accounts, and enhancing complaint mechanisms designed from children’s perspectives. The discussion also emphasized the need for better law enforcement capacity and resources, as many countries lack adequate capabilities to investigate and prosecute these crimes. Participants called for equal focus on strengthening criminal justice frameworks alongside platform regulation, ensuring child-friendly court procedures, and involving financial institutions in detecting suspicious transactions related to commercial exploitation.
Key points
## Major Discussion Points:
– **Technology Solutions and Children’s Rights Integration**: The discussion explored how technologies to detect and prevent child sexual abuse material (CSAM) can be developed while protecting children’s rights, including safety-by-design approaches, AI-powered detection tools, and privacy-preserving methods like Apple’s on-device machine learning and metadata-based risk scoring systems.
– **Cross-Platform Collaboration and Information Sharing**: Speakers emphasized the need for coordinated efforts across platforms since perpetrators often exploit multiple services. The Tech Coalition’s “Lantern” project was highlighted as an example of secure signal-sharing between companies to identify and disrupt abuse networks more effectively.
– **Policy and Legal Framework Strengthening**: The conversation addressed gaps in national and international policies, including the need for better criminalization of emerging forms of abuse (like live streaming and self-generated content), improved law enforcement capacity, stronger transparency requirements for platforms, and enhanced complaint mechanisms accessible to children.
– **Multi-Stakeholder Approach and Youth Participation**: Multiple speakers stressed the importance of involving all stakeholders – including children themselves, civil society, academia, tech companies, and governments – in developing solutions, with particular emphasis on centering children’s voices in the design process.
– **Regional Challenges and Resource Constraints**: Participants from different regions (Africa, India, Brazil, Netherlands) highlighted varying challenges including digital literacy gaps, inadequate law enforcement resources, cultural contexts, and the growing scale of the problem versus declining resources to address it.
## Overall Purpose:
The discussion aimed to explore comprehensive approaches to protecting children from online sexual exploitation and abuse, particularly in live streaming contexts, by examining how technology, policy, and education can work together through multi-stakeholder collaboration to address this evolving global threat.
## Overall Tone:
The discussion maintained a serious, professional, and collaborative tone throughout, reflecting the gravity of the subject matter. While there was one moment of tension when an audience member criticized another participant’s comments about privacy and encryption, the moderators handled this diplomatically. The conversation was characterized by mutual respect among experts sharing knowledge, with speakers building upon each other’s points constructively. The tone remained solution-focused and forward-looking, emphasizing the urgent need for coordinated action while acknowledging the complexity of balancing child safety with privacy rights and other considerations.
Speakers
**Speakers from the provided list:**
– **Sabrina Vorbau** – Project manager at European SchoolNet, representing the InSafe network of European Safer Internet Centers, co-moderating the session
– **Deborah Vassallo** – Coordinator of the Safer Internet Center in Malta, supporting with online moderation
– **Robbert Hoving** – President of Off-Limits (the Safer Internet Center in the Netherlands), President of the InSafe board
– **Dhanaraj Thakur** – From the Center for Democracy and Technology, co-moderating the session
– **Lisa Robbins** – Online safety policy analyst at the OECD
– **Aidam Amenyah** – Executive director of Child Online Africa
– **Patricia Aurora** – CEO of Social Media Matters (working in India region)
– **Sean Litton** – President and chief executive officer at the Tech Coalition
– **Sabine Witting** – Assistant professor for Law and Digital Technologies at Leiden University, co-founder of TechLegality (consultancy firm specializing in human rights and tech)
– **Kate Ruane** – From the Center for Democracy and Technology
– **Jutta Croll** – From the Dynamic Coalition on Children’s Rights in the Digital Environment
– **Andrew Kempling** – Runs a tech and public policy consultancy, trustee of the Internet Watch Foundation
– **Sergio Tavares** – From SaferNet Brasil (safe internet center for Brazil)
– **Audience** – Various audience members who made interventions
**Additional speakers:**
– **Julien Ruffy** – Associate professor at the University of Paris 8, co-chair of the working group on internet governance
– **Shiva Bisasa** – From Trinidad and Tobago, speaking from experience with a victim of financial sextortion
– **Jameson Cruz** – Youth representative from Manaus, Brazil, part of the Brazilian Internet Young Governors Training Program since 2022
– **Cosima** – Works with the UK Government, has been with NAC for Internet Center for about six years working on digital literacy and policy
– **Raoul Plummer** – With the Electronic Frontier Finland
– **Unnamed speaker** – From the Finnish Green Party
Full session report
# Workshop Report: Protecting Children from Online Sexual Exploitation and Abuse in Live Streaming Contexts
## Executive Summary
This 90-minute workshop brought together international experts to address protecting children from online sexual exploitation and abuse (CSEA), with particular focus on live streaming contexts. The session was structured in three discussion rounds covering technology solutions, cross-platform collaboration, and policy frameworks. Participants included representatives from civil society organisations, academic institutions, technology companies, and government bodies, creating a multi-stakeholder dialogue on current challenges and potential solutions.
Key themes that emerged included the need for better age assurance mechanisms, the multi-platform nature of modern abuse, concerning trends in self-generated content, and significant gaps in law enforcement capacity globally. While participants agreed on the importance of multi-stakeholder collaboration and centering children’s voices, disagreements emerged around privacy-preserving approaches and policy priorities.
## Key Participants
The workshop was co-moderated by **Sabrina Vorbau** from European SchoolNet and **Dhanaraj Thakur** from the Center for Democracy and Technology, with **Deborah Vassallo** from Malta’s Safer Internet Centre providing online moderation support.
Participants included:
– **Robbert Hoving**, President of Off-Limits and the InSafe board
– **Aidam Amenyah** from Child Online Africa
– **Patricia Aurora** from Social Media Matters (India)
– **Sergio Tavares** from SaferNet Brasil
– **Sean Litton** from the Tech Coalition
– **Sabine Witting** from Leiden University
– **Lisa Robbins** from the OECD
– **Kate Ruane** from the Center for Democracy and Technology
– **Jutta Croll** from the Dynamic Coalition on Children’s Rights in the Digital Environment
– **Jameson Cruz**, youth representative from Brazil’s Internet Young Governors Training Programme
– **Andrew Kempling** from the Internet Watch Foundation
– **Shiva Bisasa** from Trinidad and Tobago
## Rights-Based Framework for Child Protection
**Lisa Robbins** established an important foundational perspective, arguing that “protecting children from CSEA should not just be done in a way that protects and promotes their rights. I think it’s really important to acknowledge that protecting children from CSEA is protecting and promoting their rights, most obviously freedom from violence, but also acknowledging that CSEA can infringe upon dignity rights, privacy, and a safe online space is really important for enabling children to access a large number of rights in today’s reality, such as opinion, assembly, information, or education and health.”
This framing positioned child protection as inherently rights-affirming rather than creating a safety-versus-rights dichotomy.
## Technology Solutions and Current Gaps
### Age Assurance Challenges
**Lisa Robbins** revealed a significant gap in current practices: “Only two of 50 services systematically assure age on account creation.” This represents a fundamental vulnerability in child protection systems across platforms.
**Andrew Kempling** emphasized that “age verification is important to prevent adults from accessing child accounts and content,” while **Jutta Croll** noted that privacy-preserving age verification mechanisms exist without being intrusive.
### AI and Detection Technologies
**Aidam Amenyah** stressed that “AI-powered platforms should be sensitive enough to detect, prevent, and report live streaming situations affecting children,” noting that children in Africa encounter concerning content virtually every day.
**Sean Litton** described privacy-preserving detection methods, including how “session metadata and third-party signals can generate risk scores for broadcasts without analyzing actual content” and how “on-device machine learning can detect nudity while preserving privacy through local processing.”
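The session did not go into implementation detail, but the metadata-based approach described above, combining non-content signals into a per-session risk score that is then escalated to a human safety team, can be sketched roughly as below. This is a minimal illustration only: the signal names, weights, and threshold are hypothetical and do not represent the Tech Coalition's or any member's actual tool.

```python
from dataclasses import dataclass

@dataclass
class SessionMetadata:
    """Non-content signals about a live stream session (all fields hypothetical)."""
    broadcaster_account_age_days: int
    uses_anonymization_service: bool   # e.g. VPN or proxy detected
    viewer_count_spike: bool           # sudden, unusual influx of viewers
    prior_third_party_flags: int       # signals shared by partner services
    high_risk_region_mismatch: bool    # broadcaster/viewer geography pattern

# Illustrative weights; a real system would learn these from labelled data.
WEIGHTS = {
    "new_account": 0.2,
    "anonymization": 0.15,
    "viewer_spike": 0.15,
    "third_party_flags": 0.3,
    "region_mismatch": 0.2,
}

REVIEW_THRESHOLD = 0.6  # hypothetical cut-off for escalation to human review

def risk_score(meta: SessionMetadata) -> float:
    """Combine metadata signals into a 0..1 risk score without inspecting content."""
    score = 0.0
    if meta.broadcaster_account_age_days < 7:
        score += WEIGHTS["new_account"]
    if meta.uses_anonymization_service:
        score += WEIGHTS["anonymization"]
    if meta.viewer_count_spike:
        score += WEIGHTS["viewer_spike"]
    score += WEIGHTS["third_party_flags"] * min(meta.prior_third_party_flags, 3) / 3
    if meta.high_risk_region_mismatch:
        score += WEIGHTS["region_mismatch"]
    return min(score, 1.0)

def needs_human_review(meta: SessionMetadata) -> bool:
    """Flag the session for the child safety team; no automated takedown here."""
    return risk_score(meta) >= REVIEW_THRESHOLD
```

The design point emphasised in the session is that the score is computed from metadata alone, so the stream's content is never inspected, and a high score only prioritises the broadcast for human review rather than triggering automated enforcement.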
## Cross-Platform Nature of Abuse
**Sean Litton** provided crucial insight into modern exploitation patterns: “The bad actor might contact a child on a gaming platform, move them to a private messaging platform, and then perhaps use a live streaming platform down the road. So the abuse spans social media, gaming, live streaming, payment apps, and more. But the individual company is obviously unaware of what happened on the other platforms.”
This revelation highlighted why isolated company responses are insufficient and coordinated industry collaboration is essential.
### Project Lantern
**Sean Litton** presented the Tech Coalition’s Lantern project as an example of cross-platform collaboration, enabling secure signal sharing between companies about policy-violating accounts while preserving privacy. The project is piloting with payment providers to share signals on financial transactions, recognizing that financial extortion is a major component of crimes against children online.
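Lantern's internal schema and APIs were not described in the session; conceptually, cross-platform signal sharing of this kind exchanges minimal, pseudonymised indicators rather than user content. The sketch below is purely illustrative under that assumption: the field names and the plain SHA-256 hashing are hypothetical simplifications, not Lantern's actual design, and a real deployment would add stronger privacy protections, vetting, and legal safeguards.

```python
import hashlib
from datetime import datetime, timezone

def hash_identifier(value: str) -> str:
    """Pseudonymise an identifier (e.g. an email address) so raw values are not exchanged."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

def make_signal(source_platform: str, identifier: str, signal_type: str) -> dict:
    """Build a minimal shareable signal record (all field names hypothetical)."""
    return {
        "source_platform": source_platform,
        "identifier_hash": hash_identifier(identifier),
        "signal_type": signal_type,          # e.g. "csea_policy_violation"
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

def matches_local_user(signal: dict, local_user_emails: list[str]) -> bool:
    """A receiving platform checks the shared hash against its own user base."""
    return any(hash_identifier(email) == signal["identifier_hash"]
               for email in local_user_emails)

# Example: platform A shares a signal; platform B checks it against its own accounts
# and, on a match, prioritises that account for review under its own policies.
shared = make_signal("example-platform-a", "offender@example.com", "csea_policy_violation")
print(matches_local_user(shared, ["someone@example.com", "offender@example.com"]))  # True
```

The point of the illustration is that a receiving platform can cross-reference a shared signal against accounts it already knows about and prioritise them for review under its own policies, without either side exchanging raw personal data or content.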
However, **Sabine Witting** raised concerns that “Global South representation is often underrepresented in industry standard-making processes,” while **Kate Ruane** emphasized that cross-platform efforts “create significant risks for human rights and need multi-stakeholder engagement for transparency.”
## Alarming Statistics on Self-Generated Content
**Robbert Hoving** presented concerning data: “Of all the reports coming in, one third of those reports is self-generated, meaning young children making sexualized images themselves… And of that one third of reports, 75% is prepubescent, implying very young children. This is not children going to middle school or in their teenage years. This is really young children.”
**Sabine Witting** argued that self-generated content requires nuanced approaches considering voluntary versus coercive production, challenging simple criminalization approaches.
## Global Scale and Regional Perspectives
### Staggering Global Statistics
**Andrew Kempling** provided sobering context: “roughly 300 million victims of child sexual abuse and exploitation every year globally. That’s about 14% of the world’s children each year.”
### Regional Challenges
**Patricia Aurora** shared specific data from India, citing NCMEC CyberTipline figures indicating that roughly 9 million of the 36.2 million reports received related to children in India. She noted that India lacks robust legal frameworks for child protection online and highlighted that live streaming platforms are being used for monetizing pre-recorded child abuse content.
**Aidam Amenyah** emphasized that Africa faces uneven digital literacy alongside rapid technology growth, and that laws are not effectively enforced while internet service providers lack accountability.
**Sergio Tavares** raised sustainability concerns, noting that Brazil faces growing numbers of reports while resources are declining globally.
## Law Enforcement and Criminal Justice Gaps
**Sabine Witting** made a critical observation about policy priorities: “I would really like to see the same effort from governments that they at the moment put into platform regulation, they should put into law enforcement and strengthening the criminal justice framework. Because there is a bit of a, I feel like an over-focus at the moment on the responsibility of platforms, while we know that if these cases then really reach the court system, most of them either A, fall through the cracks, or B, the children that are forced to go through the court system leave extremely traumatized.”
She also noted that criminal justice systems lack protective measures and trauma-informed approaches for child victims.
## Privacy and Encryption Debate
**Kate Ruane** defended end-to-end encryption as essential protection, arguing that “privacy and safety should be viewed as complementary rather than opposing values” and that “content-oblivious methods are more effective than content-aware methods for most harmful content.”
One participant argued that privacy was being “weaponized” as an excuse not to stop CSAM sharing on encrypted platforms. **Raoul Plummer** stated his disagreement with suggestions that privacy advocates were somehow complicit in child abuse.
## Personal Impact and Human Cost
**Shiva Bisasa** provided powerful personal testimony about the human impact: “I know of a victim of financial sextortion who took his life because of this.” This reminder of the real-world consequences underscored the urgency of the issues being discussed.
## Children’s Participation
Multiple speakers emphasized involving children in solution design. **Aidam Amenyah** noted that “children want involvement in designing protection solutions to ensure they are fit for purpose,” while the inclusion of **Jameson Cruz** as a youth representative demonstrated practical implementation of this principle.
## Financial Aspects
**Sean Litton** highlighted that “financial extortion is a major component of crimes against children online,” while **Sabine Witting** noted that “following the money is one of the most important leads for law enforcement in organized crime.”
## Transparency and Accountability
**Lisa Robbins** noted that “companies rarely provide detailed information about their moderation practices specific to live streaming,” making it difficult for policymakers to understand where targeted action is needed. She emphasized that “better and more granular transparency and information is really key for policymakers to be able to react and understand where targeted policy action and deployment of safeguarding technology is needed.”
## Areas of Agreement and Disagreement
Participants generally agreed on the need for multi-stakeholder collaboration, better law enforcement capacity, improved platform transparency, and centering children’s voices in solution design.
However, significant disagreements emerged around privacy-preserving approaches versus content scanning, whether policy focus should prioritize platform regulation or criminal justice strengthening, and how to implement cross-platform collaboration with appropriate oversight.
## Commitments and Next Steps
Several concrete commitments emerged from the discussion:
– The Tech Coalition committed to publishing results of their financial payment provider pilot
– The OECD agreed to share Financial Action Task Force research on disrupting financial flows related to live stream sexual abuse
– The Internet Watch Foundation offered to collaborate with tech companies to validate tool effectiveness
– Participants agreed to continue conversations at the IGF village
## Conclusion
The workshop revealed both the complexity of protecting children from online sexual exploitation and the potential for coordinated action across sectors. The alarming statistics about self-generated content involving very young children, the multi-platform nature of modern abuse, and significant gaps in law enforcement capacity underscore the urgency of coordinated action.
While disagreements remain about implementation approaches and policy priorities, the consistent emphasis on centering children’s voices and the recognition that no single entity can solve these problems alone provides a foundation for continued collaboration. The path forward requires sustained commitment from all stakeholders, adequate resource allocation, and continued innovation in developing solutions that protect children while preserving their rights and digital participation.
Session transcript
Sabrina Vorbau: Wow, good morning, everyone. It’s nice to see you all, and welcome to the workshop on protecting children from online sexual exploitation, including live streaming spaces: technology, policy, and prevention. My name is Sabrina Vorbau, I’m a project manager at European SchoolNet, representing here the InSafe network of European Safer Internet Centers. I will be co-moderating the session together with my colleague, Dhanaraj Thakur, from the Center for Democracy and Technology, and we are also joined by our colleague, Deborah Vassallo, who is the coordinator of the Safer Internet Center in Malta, and she will be supporting us with the online moderation. Welcome everyone here in the room and also online for joining our session and for your participation. The session aims at exploring how technology, policy, and education can work together to tackle the evolving threat of online child sexual exploitation and abuse, also known as CSAM or CSEA, particularly in the live streaming context, but not exclusively. We are really fortunate to be joined by an incredible group of experts from civil society, academia, and the tech industry who will share their insights from different regions and perspectives, truly in the multi-stakeholder approach of the IGF. We have structured our session, which will run for 90 minutes, in three parts where we will tackle different angles of the subject. Before we dive into the first round, let me briefly introduce our speakers. Here in the room with us we have, next to me, Robbert Hoving, who is president of Off-Limits, the Safer Internet Center in the Netherlands, and he is also president of the InSafe board, and we have Kate Ruane with us, who is also from the Center for Democracy and Technology. Joining us virtually from various parts of the world we have Sean Litton, who is president and chief executive officer at the Tech Coalition; Dr. Sabine Witting, assistant professor at Leiden University; Patricia Aurora, CEO of Social Media Matters; Aidam Amenyah, executive director of Child Online Africa; and Lisa Robbins, online safety policy analyst at the OECD. So lots of expertise on our side, and I see also loads of expertise in the room. So let’s begin by exploring how technology tools are currently being developed and implemented to detect and to prevent CSAM in live streaming spaces, but not exclusively, and how we can ensure that these tools align with children’s rights and safety. We will turn a first question to our speakers, and after each round we will take a pause and invite you to share your interventions and your questions with us; if you would like to do so, please just line up here in front of the microphone. So our first question: how can technologies to prevent CSAM be integrated into trust and safety systems in a way that protects and promotes children’s rights? And we will start with our first speaker joining us online. I give the floor over to Lisa, whom we can see on the screen. Hi, Lisa. Welcome. Thanks for joining us. Hi, thanks so much. And hopefully you can hear me well. Wonderful. Yes. Great. So thank you so much.
Lisa Robbins: It’s really great to be here on this really rich panel for this super important conversation. So just for a tiny little bit of context, for those of you who don’t know the OECD, we’re a multilateral organization. We have 38 member countries, and we work multilaterally on a consensus basis to develop research and evidence-based policy recommendations on lots of issues, including digital policy and, within that, digital safety and specifically work on children in the digital environment. My first remark in response to the question is that I think it’s really important to acknowledge up front that protecting children from CSEA should not just be done in a way that protects and promotes their rights. I think it’s really important to acknowledge that protecting children from CSEA is protecting and promoting their rights, most obviously freedom from violence, but also acknowledging that CSEA can infringe upon dignity rights and privacy, and that a safe online space is really important for enabling children to access a large number of rights in today’s reality, such as opinion, assembly, information, or education and health. So while it is really important to recognize where there are tensions between rights in our solutions and in policy solutions, which is the angle that I come from, it’s also really important to recognize that there is a direct child rights implication on a number of layers of CSEA itself. Secondly, just to break down the question of what it is that we mean by technologies: are we referring to the companies themselves that provide live streaming, or to safety technologies and how they could be integrated into those services to provide safeguards for children? Or are we talking about a combination of both? In the interest of time, I’ll tackle the question from the perspective of both, and I’m going to highlight the importance of taking a safety-by-design approach. I know that that’s not a technology in itself, but there are technologies that underpin safety by design. Here at the OECD, we’ve done a little bit of work to really understand what this concept means, and last year we published a report that posits eight key components for digital safety by design. These roughly fall into three buckets. I’m not going to talk about one of them, which is more about corporate social responsibility and an environment of safety. But the other two buckets, one of which is putting in technologies and tools at the front end to prevent harm from occurring, and the other putting in tools and technologies at the back end to detect harm should it occur, really do require incorporating certain technologies into service design and delivery. Secondly, we have also done some research here which can shed some light on how companies are actually doing in this space and what is actually occurring in relation to the incorporation of technologies to meet those two aspects, at the front end and also as a safety net. Firstly, we’ve done a series of reports looking at company practices for transparency reporting and other public-facing policies and governing documents relevant to child sexual exploitation and abuse. We look at a wide number of things, such as how CSEA is defined on a platform, how they enforce their policies, moderation practices, and transparency reporting practices. But one of the things we do look at is how they detect CSEA.
Now, we’ve done two reports relevant to this. One report was published in 2023, and another one will be published this coming Monday. The one coming on Monday shows that companies really rarely provide detailed or clear information on their moderation practices. We don’t specifically break this down in the report, but I did a quick scan of the some 80 services we looked at relevant to live streaming. Setting aside those services whose sole business is live streaming, only two of those provide information on their moderation practices specific to live streaming. And again, only a few provide metrics that break down their live streaming incidences. So really, for policymakers to be able to react and understand where targeted policy action and deployment of safeguarding technology is needed, better and more granular transparency and information is really key. Now, I realize I’m about to run out of time, but we’ve also done a similar exercise relevant to age-related policies. This paper is out today. And just to say really briefly, it’s clear from the research we’ve done that age assurance, and I know this is a hot topic on a lot of people’s minds, is not quite there yet. We identify this as a key component of safety by design because companies need to know who their users are to put in place child protective safeguards. But of the 50 services we looked at here, only two systematically assure age on account creation. So there are still gaps to be closed in relation to age assurance. And I’ll leave it there. Thank you so much, Lisa, for kicking us off and sharing with us the incredible work the OECD is doing at global level, mentioning a lot of publications, and we will also make sure to include these in our session report. You mentioned very interesting areas, and I think that’s a very good kick-off for this first round.
Sabrina Vorbau: We will now turn to Avo, who is from Child Online Africa and also a very big advocate for children’s rights, and we will hear a little bit more from Avo about what is happening in the space, particularly in the Africa region. I hope, Avo, you can hear us. The floor would be yours. Yes, I can hear you.
Aidam Amenyah: Good morning. Hello, everyone. Pleasure connecting with you from Accra, Ghana. I’m going to build on the background Lisa gave, because we don’t want to burn time; we want to build on what each other is presenting. As you are all aware, Africa has a unique situation where we have an uneven situation of digital literacy, and the rapid growth of technology is helping to some extent, but it’s also exposing a number of incidences. At Child Online Africa, our focus is to give children what they want and what they need in order to be online. So by so doing, we interact with these young people a lot to get things done. And one of the things they make us understand is that virtually almost every day, there’s something that they encounter in the space. So we ask, okay, what can be done in order to safeguard your interests and your engagement in the space? And they made us understand that it’s important, now that AI is there, that platforms are made in such a way that they are sensitive enough to detect, prevent, and also report live streaming situations that affect children. And the moment they mentioned AI, you ask them, okay, so what can you do yourself? And they say, okay, we can try to code something, but we haven’t gotten there yet. But then we are of the view that, in a growing environment where young people are largely involved in the space, it will be important for us to look at protocols that allow laws to be enforced effectively, because one of the things they came up with is the fact that the laws are not biting and it looks like they are not working effectively as far as our country and our region are concerned. So they’re looking at the involvement of children in local communities to design solutions which affect them, so that these will be foolproof. It wouldn’t be like adults sitting to come up with designs that do not really impact on children. They also recommended that there should be human oversight on AI moderation to prevent false positives if possible. And again, they said we should put a lot of responsibility on internet service providers, because they should be able to block some of the live streams of children, but they are not doing that enough, all because there are no laws holding them accountable. So they feel that if the service providers were made a lot more responsible, the system would be good enough and conducive enough for them. But their involvement is very key in ensuring that the design of the protection solution is fit for purpose and is intuitive for them to interact with. So by extension, they felt that it’s important for platforms to be faster in identifying incidences of abuse and rescuing young people if they should fall victim, prosecuting perpetrators, and also keeping at the back of designers’ minds that children are using the platform. Even though they don’t give money, they don’t buy anything, they are also consumers, and their interest should be taken on board and taken seriously. That’s what I can say for now, thank you.
Sabrina Vorbau: Thank you so much, Avo, and also for highlighting already youth participation in the whole process. So really making young people part of the development process, but also of policymaking and design, and definitely a call for action for more responsibility from the platforms. Thank you for now. We will now go to our next speaker, Patricia Aurora. We talked about platforms already. Patricia works for Social Media Matters, specifically in the India region. I hope, Patricia, you can hear us, and the floor would be yours now. Thank you for joining us.
Patricia Aurora: Thank you, Sabrina, thank you for the opportunity. I’m so happy to be here, joining virtually, to give the context of our child safety work in India, what we have been doing, and how India is shaping the laws around child protection. Given that, just a brief introduction about who I am for the audience over there. I’m Patricia Aurora, and I’m currently leading Social Media Matters, an organization working at the intersection of digital safety, child rights, and online harm prevention in India. I’ve been engaged in shaping conversations around child protection, digital literacy, and trust and safety frameworks. And hearing Lisa and Avo set the context so far, I think this adds value when we talk about the ongoing initiatives that we have been running in India for over a decade now. This has given us a clear understanding that we need children and their voices to be heard by many. Right now in India, what we come across is that many children who are facing any form of abuse or harm do not feel able, or do not know the path, to report the situation, even if it is a form of online harm like cyberbullying. This is one area which is talked about across India: cyberbullying amongst minors is a major issue. And when we’re talking about technology and policies, I think the tech platforms have a major role here. Why? Because we are giving children the liberty to access the internet, which is in some ways a right for them in the digital space, and we’re talking about digital rights in India. But sadly, what we are seeing is that tech companies are failing, in the sense that we do not have standard mechanisms to protect children. And these standard mechanisms are not well defined because we do not have a robust legal framework. We are currently borrowing sections from various laws to protect children online, which was not even contemplated until we realized how children are being targeted in online spaces. Coming to grooming, it has taken the lives of many young children as well, because of these harms in the online space. Now coming to live streaming, because live streaming is one platform where pre-recorded videos are being monetized for their own benefit, and this is coming from cross-platform activity as well. The use of child-abusive content is something which is at its peak right now. Quoting some stats from reports released by NCMEC, the CyberTipline report indicated that roughly 9 million of the 36.2 million reports received related to children in India being targeted. So this gives a very big number for us to even think about what is happening, especially when we talk about the cultural aspect of it: how we are looking into this content, from the live streaming perspective as well, and how we are able to add a lens of safety for children in the online space. Are we giving them adequate mechanisms so that they can freely access the internet? Or is that freedom turning into a bad shape for them? So these are some points that I want to think about. And also: do we need to redefine these policies? Do we need to rethink them? Do we need to reimagine what is happening? Do we need to reframe how we work?
Do we need more collaborative efforts where civil society organizations, tech companies, and regulators can come together and create a full package focused on children’s safety? At this juncture, when we are talking about the legal frameworks, many of you must have heard about the Digital Personal Data Protection Act, which talks very specifically about child safety in India. There is a parallel legal framework called the Digital India Act, which is meant to take forward the Information Technology Act, where India is trying to define all sorts of online harm and protect digital users. So with these two important acts, we are also hopeful that they will reshape the whole cultural context of child safety in India, including live streaming spaces, and protect children from the situations they are encountering in online spaces at the given time.
Sabrina Vorbau: Thank you very much, Patricia, for your intervention, for bringing us a bit closer to the national context and the current situation in India, and also for the, yeah, very shocking numbers you have shared with us. We will come back to this in our second round, where we will talk a bit more about cross-platform and multi-stakeholder collaboration, but reporting is, of course, crucial: we need to make it more accessible to children and young people, and the public in general, and build more awareness but also more confidence in young people to take the step of reporting. Before we come to our last speaker for this first round, I just remind everyone: if you would like to make an intervention or ask a question here in the room, please line up at the microphone, and online there is also the possibility to intervene; you can just raise your hand. We will now conclude this first round, giving the floor to Sean Litton, President of the Tech Coalition. Sean, I hope you’re with us and the floor is yours now. Yeah, thank you. It’s great to be with you all.
Sean Litton: By way of introduction, the Tech Coalition is a global association of leading tech companies. We have social media, search engines, and many live streaming platforms; for some, that’s their primary business, and many other platforms have live streaming as a component of their service. And we are 100% focused on preventing and disrupting child sexual exploitation and abuse and building our members’ capacity to do this. As we heard from the other speakers, there are alarming trends with respect to online child sexual exploitation and abuse in a live streaming context. It is perhaps the number one way that children are now exploited commercially, in what we would call sex trafficking. These trends underscore the need to develop and deploy technologies that prevent child sexual exploitation and abuse on live streaming platforms, while also ensuring children’s safety, well-being, and privacy. The Tech Coalition and our members are prioritizing ways to do this, and there are two examples I’d like to talk about today. First of all, we are working with one of our members, a major live streaming platform, to develop a tool to detect child sexual exploitation and abuse in a live streaming context. It uses session metadata and third-party signals to generate a risk score for the particular broadcast. And because this tool operates without analyzing the actual content of the live stream, privacy standards are preserved. Based on that score, the child safety team can take a closer look and decide whether to shut down or intervene in some way in that broadcast. So in practice, this approach relies on participant characteristics, like country of origin, the use of anonymization services, et cetera. And by combining these metadata signals, the system indicates the likelihood of child sexual exploitation and abuse activity occurring within a given live stream session for further investigation. Development of this tool began in the fall of 2024, and this summer our members will test and formally evaluate the feasibility of this approach for broader industry adoption. So our goal is to advance detection methods that use behavioral signals and metadata rather than relying solely on content scanning, and that preserve the privacy of the conversation. Another example is Apple’s communication safety feature; Apple is a member of the Tech Coalition. This uses on-device machine learning to detect nudity in photos and videos, and because the analysis happens on the device, neither Apple nor any other third party observes the content or is aware that nudity was shared. When nudity is detected, the image is blurred, and the child sees age-appropriate safety information and health resources. Recently, Apple announced the expansion of this feature to FaceTime calls. It is also available by API, for free, to all developers who develop apps for iOS. This enables any app to check for nudity in a video stream, detecting the sensitive content either from the device’s camera or from remote devices signed into the conference call. These are just two examples of how child sexual exploitation and abuse detection can combine both children’s safety and privacy to reduce abuse in live streaming environments.
Sabrina Vorbau: Thank you very much, Sean, also again for giving us the perspective on the work that is going on on the tech company side, which is certainly a lot. We will come back to Sean in the next round, where we will also talk a little bit more about multi-stakeholder collaboration, and Sean already mentioned the promotion of well-being for children and young people when using technology. Before we get there, I see we have a few people lining up. Please, before you intervene or ask your question, quickly introduce yourself.
Audience: Please go ahead. Hello, good morning. I’m Julien Ruffy, I’m an associate professor at the University of Paris 8, I co-chair the working group on internet governance, and I’ve taken part in a few research projects on online safety. One of the things that comes up very frequently when we do focus groups in middle and high schools with young people is that a lot of the time they feel they can be compelled by adults and institutions to use digital tools. Of course, I’m not talking about live streaming, but just having to use a mobile phone and being online a lot, even when sometimes they might wish to disconnect. And when they are faced with this kind of content, or with cyberbullying problems, or any of the things that you have talked about, the main problem that comes up is: where do I find a safe space where adults can hear my problems and act on them? And I feel like a lot of the proposals coming from the tech company side are to say, well, we’re going to develop a technology that is going to detect nudity and things like that without infringing on privacy, which is something that I would like maybe to question sometimes. But actually, what happens then if some content is flagged or something really bad happens? What happens if the police, for example, don’t act upon it? So my question to you is: in your work advocating for children’s rights in the online sphere, how much effort are you targeting towards law enforcement agencies and educational institutions, so that when something happens, they act upon it? Because you can flag content as much as you like. Platforms can act as much as they’d like. If at some point there is no law enforcement coming in, problems are not going to be solved. Thank you. Thank you so much for your question. We haven’t
Sabrina Vorbau: heard from Robbert yet, but maybe this could already be a question for you to briefly answer, though I know that you will later give a bit more context on the work that Off-Limits is doing. Yes, thank you very much. Good morning everyone, and thank you for that question. I would like to bring in the perspective of the
Robbert Hoving: safer internet centers, the system we have in Europe, because that actually is more of a safe space both for parents and caretakers but also for people who are a victim, to find help. And a lot of times that help can be that they just have someone who listens to them. They can also take the content down, and then if they want to take more action, for instance go to the police, they can help with that as well, but they can also point them towards the right helping parties. So for me it would be really crucial to strengthen such a network, because these are spaces where people can go. And a third aspect is awareness raising: to help schools and to give schools material to ensure that it doesn’t happen. You will always have people with bad intent, so you cannot stop everything, but I think you could step up much more on prevention, looking at the data that is actually coming in at the safer internet centers. And I think we will hear a little bit more about this later
Sabrina Vorbau: on in the session. Maybe for this round one more question or intervention and
Jutta Croll: Jutta please. Thank you for giving me the floor Sabrina. I am Jutta Kroll from the Dynamic Coalition on Children’s Rights in the Digital Environment and since I would like to say a few words about the Global Age Assurance Summit, which is part of the Global Age Assurance Summit, and Age Assurance was already mentioned as a tool to protect children from online sexual exploitation. I just wanted to intervene and to turn your attention to the Global Age Assurance Summit, Standard Summit communique on Age Assurance, which is a child rights-based approach to such issues, and I would like to say a few words about that. It’s possible with Age Assurance instruments to make sure that adults cannot join these spaces that are made for children, and the Global Age Assurance Standard Summit came to the conclusion that this can be done, data minimizing and privacy preserving, without being intrusive and gathering data from children, as well as providing a safe space for children to be able to join the community. So that’s what we are doing, and we are also working on the ISO standard 27566, so if anybody is interested how that can be done, you will find the communique. We have brought some copies and just come to the Dynamic Coalitions, who is in the IGF village, and we can talk about that. Thank you.
Sabrina Vorbau: Thank you very much, Jutta, and thank you to all of you for being here. I think we have covered a lot of topics, and thank you for the great work you and your colleagues are doing on children’s rights. I think we will move to the next round, but please stay with us. We will bring you in for interventions later.
Dhanaraj Thakur: I will hand over now to Dhanaraj to guide us through the second round, looking more into cross-platform and multi-stakeholder collaborations. Dhanaraj, I will hand over to you. Thank you, Sabrina, and thank you to all of you for being here. In this second round we want to look at the cross-platform spread of content related to child sexual exploitation and abuse. This is a significant problem and part of the reality of how this kind of exploitation and abuse occurs. So, the question that we want to raise, to start with our experts, and then we can move into Q&A at the end of the round as well, is how can we promote, or better promote, cross-platform efforts, including on livestream platforms, to prevent the spread of this kind of child sexual exploitation and abuse content. And to do that, I’m gonna start with you, Robbert, to hear your thoughts first on this issue,
Robbert Hoving: on this cross-platform problem. All right, thank you very much. As we are the Safer Internet Centre in the Netherlands, Off-Limits, we have the hotline where child abuse material can be reported, we have the helpline for other transgressive behavior, where caretakers can also call for help, and we also have an initiative called Stop It Now, which is a prevention line for people who are actually watching this material. When I bring that perspective in, starting for instance with the perpetrator: we did research and we discovered that of the people calling Stop It Now, more than half are males under 26, and they watch this type of material out of escalating behavior. For instance, because they went online at too young an age, and we know how easy it is to get access to an adult website, and we also know that looking at heavy material at a young age desensitizes and can mean that people want to look for heavier material. Abuse material might be that heavier material. When we look at the victims, for instance at the numbers of the hotline at Off-Limits, of all the reports coming in, one third of those reports is self-generated, meaning young children making sexualized images themselves; it might be alone, it might be with other children, it might be with attributes, but they make them themselves, through webcams, phones, et cetera. And of that one third of reports, 75% is prepubescent, implying very young children. This is not children going to middle school or in their teenage years; this is really young children. And then we look at platforms. There are different types of platforms. There might be live streaming platforms or also social media. There are a lot of good aspects about social media, and some social media are really geared towards sharing content, but there is also social media that is designed to meet people and to connect with people. Now, when we go back to that 75% prepubescent, we know, and we are going to investigate this now together with the Ministry of Justice, that it might be grooming, for instance, in online spaces. But it might also be risk behavior, looking for attention because it comes from previous abuse, or because it is stuff they have seen online at too young an age and they are enacting that. Now, when you have these three ingredients: Ms. Croll already mentioned age verification. I think it is very important, in certain online spaces, to have age verification to ensure that older people who want to pose as younger people and young people who want to pose as older people cannot connect. And also to chime in and echo what Janice said at the session about deepfakes and cross-platform efforts connecting with education: I think we could also really have companies stepping up to collaborate more with schools, to give material instead of just pushing their tools towards schools, and also to help build up the curriculum. Because we always say education, but when you put everything on schools, then there should also be the capacity at schools to be able to do that. And I think combining it with the tech sector, for instance, like Janice mentioned, could actually be a very good idea. So that would be my reaction to your question. Great.
Dhanaraj Thakur: Thank you. Thank you for those points, particularly the statistics you mentioned around prepubescent self-generated content. So Sean, I want to turn to you now, given your particular perspective with the coalition and therefore the opportunity to work across platforms
Sean Litton: and engage in efforts across platforms, I’d be curious to hear your thoughts on this issue of solutions, cross-platform solutions. Yeah, thank you for the question. So as the other speakers have noted, bad actors typically exploit multiple services across the tech ecosystem in their attempts to groom children, distribute CSAM, or engage in other harmful activities like financial extortion. So for example, the bad actor might contact a child on a gaming platform, move them to a private messaging platform, and then perhaps use a live streaming platform down the road. So the abuse spans social media, gaming, live streaming, payment apps, and more. But the individual company is obviously unaware of what happened on the other platforms. They don’t have all the information, and that makes it difficult for them, without the complete picture, to adequately grasp what’s going on and take action. So that’s why industry collaboration is essential. At the Tech Coalition, we recently launched a program called Lantern, which is the first cross-platform signal sharing program that helps companies strengthen enforcement of their child safety policies. We launched Lantern so companies could securely share signals with one another about accounts and activity that violate their own child sexual exploitation and abuse policies. Until Lantern, there was no consistent way for companies to share this information in a secure and privacy-preserving way. Lantern helps fill that gap by revealing a fuller picture of the harm. Working with Lantern, companies can increase their prevention and detection capabilities, speed up threat identification, build awareness of emerging threats and bad actor tactics, and strengthen their reporting out to hotlines and other authorities. So we know this approach works. And last year, members shared hundreds of thousands of signals through Lantern. This led to account actions, content removal, and the disruption of offender networks and CSAM circulation. Signals helped flag contact and trafficking cases as well that may not have been identified otherwise. It’s really important. And crucially, these outcomes come in addition to the original action taken by the company that first detected the abuse, showing how Lantern enables a ripple effect of protection across the ecosystem. So together, initiatives like Lantern are helping close detection gaps and enabling faster action, and that collaboration really does make a difference. It’s not just possible. It’s powerful.
Dhanaraj Thakur: So thank you. Great. Thank you, Sean. Very interesting to hear about Project Lantern, and I’m sure that might come up again in the Q&A. Very good. Next, I want to turn to one of our online speakers, Sabine Witting, based at Leiden University. Sabine, hopefully you can hear us, and since you’re joining us now and speaking for the first time, if you want to say more about yourself, please go ahead. Thank you. Thank you so much. And I hope you can hear me all right.
Sabine Witting: Yes. My name is Sabine Witting. I’m an assistant professor for Law and Digital Technologies at Leiden University, but I’m also the co-founder of TechLegality, which is a consultancy firm specializing in human rights and tech, especially on a lot of these hot potato topics that already came up today, such as age assurance. We do a lot of work on these topics across the world. So thanks so much for the question around cross-platform collaboration. I think this kind of collaboration is essential because platforms all deal with the same human rights and children’s rights issues, especially issues around competing rights, and a lot of this is essentially trying to square the circle, and this kind of collaboration can certainly assist with that. I think also that cross-platform collaboration alone, of course, is not enough. As important as it is that industry has its own space, for example in the Tech Coalition, to really collaborate, I think a multi-stakeholder effort is always crucial to have all people affected by technologies at the table: human rights advocates, child rights advocates, academia, but also parents and children themselves. And I also want to use the example of one of the technologies that is often put forward as a solution to child sexual abuse in the digital space, especially for live streaming, which is age assurance, which has already come up a few times. When we approach age assurance, for example, from a multi-stakeholder collaboration perspective, one of the key gaps that is always criticized about age assurance is the lack of clear evaluation criteria: how we can assess, for example, the effectiveness and the robustness of these technologies, and also really understand better who is adversely impacted by them. At the moment there are some industry standards, which have played a very important role for quite a long time. But the problem is that a lot of the industry standards are not accessible and they are not drafted in a multi-stakeholder way. They are drafted by industry, often by age assurance providers themselves. And that, of course, begs the question of whether these standards might be biased to a certain extent. Have human rights concerns from across the world really been taken into consideration? And here I’m not only talking about the Global North, but especially the Global South. Representatives of the Global South are usually underrepresented in these industry standard-making processes, which is a huge problem because of the important role that industry standards are playing here. And maybe I want to point to a good practice example that I’m lucky enough to be part of, which is the current drafting of the IEEE standard on the prevention of CSEA and generative AI. This group is really a combination of all the actors that we need around the table: we have human rights advocates, we have industry, we have tech experts and academia. And it’s really enriching to see how an industry standard can be developed with all of these different stakeholders at the table. We also have a strong representation from the Global South. And of course, generative AI, as we’ve heard from Avo and from Patricia, is a different story in India and in African countries than it is in the Global North.
So it's really important to have not only cross-platform collaboration, but also multi-stakeholder and regional representation, especially of vulnerable groups.
Dhanaraj Thakur: Great, thank you so much, Sabine, and thank you for raising the point about the relevance of standard setting, particularly around age assurance technology, and the value of multi-stakeholder processes there. I now want to turn to Kate Ruane, my colleague at the Center for Democracy and Technology. Kate, we heard from Sean about industry efforts around cross-platform collaboration, and we heard from both Robbert and Sabine about the importance of multi-stakeholder involvement in these efforts. So I'm curious about your thoughts on how we can better address the cross-platform proliferation of these kinds of content.
Kate Ruane: Thanks so much, Dhanaraj, and thanks to everybody who has spoken so far. I echo a lot of what's already been said, and I specifically want to pick up on the point that cross-platform efforts need multi-stakeholder engagement in order to work best, in part because cross-platform efforts are going to create even more significant risks for human rights like free expression and privacy. And that is actually a necessary thing: child sexual abuse and exploitation is such a large and clear harm, and it is a crime around the world for a reason. That means that efforts to restrict it are necessarily proportionate to that harm, and so when mistakes get made, we can see really significant impacts on the lives of innocent people. In order to ensure that our responses to the crime of CSEA are proportionate, multi-stakeholder engagement can help ensure that harms do not extend beyond the criminal activity. Multi-stakeholder engagement can also be really helpful for ensuring that there are things like transparency and appeals processes. The Tech Coalition, for example, has done human rights impact assessments on its efforts to combat CSEA. That's really positive, and it would be really helpful to have more spaces in which children themselves, survivors, civil society, technologists, especially privacy experts, and others can engage with the ways that platforms are developing their information-sharing efforts, to ensure that human rights are respected throughout their development and execution. And I specifically want to call out a couple of things. The Tech Coalition has talked about its development of a tool to detect signals across platforms for child sexual abuse and exploitation. This is a really interesting and valuable tool, I think, and it would be helpful in particular to have transparency into the tools themselves, not just for signals development, but also for things like content detection, especially in live streaming, where it's particularly difficult to execute. Most content detection tools are designed to identify content at rest, whereas live-streamed content is constantly in motion. The various tools that exist to try to identify child sexual abuse and exploitation within live streaming are currently significantly lacking in benchmarks, yet we have a number of organizations marketing technologies that claim to be able to identify child sexual exploitation and abuse content in live streaming or in video content. We don't really have a good way to identify whether these products actually work, or work sufficiently, or whether there are sufficient safeguards in place to address potential errors in the content detection tools. Multi-stakeholder engagement and transparency can help with both of those things. They can help us identify and improve these types of tools, help us understand how they work, and help us deploy better transparency and accountability mechanisms for tech companies themselves and for governments engaged in proper enforcement of their laws. So those are a couple of things that I thought about. I also wanted to turn back to Robbert's point about media literacy and ensuring that we are investing in people's understanding of how to engage in combating CSEA at the local, person-to-person, and user level. I think we definitely need significantly more engagement on that front, and we also need tech companies, both in a cross-platform and in a transparency way, to talk to each other about how their reporting processes work.
One of the things that we see is that your enforcement efforts are sometimes only as good as your reporting processes. If it is difficult to find the button, just the simple design feature of how to report harm and ensure the report moves forward, that is another simple thing that platforms can share information and transparency reporting about, to make sure that we are consistently getting better at how we combat these harms while also ensuring that we are protecting free expression and privacy in the process.
Deborah Vassallo: Thank you. Thank you, Kate. Continuing the format that Sabrina laid out, we will now take a break for questions and interventions from the audience, and again I would ask people to line up by the mic. Please line up and we'll take as many as we can; we do have a few minutes. So yes, sir, please start, and if you could briefly introduce yourself before you share your question or intervention.
Andrew Kempling: Sure. Good morning, everyone. My name is Andrew Kempling. I run a tech and public policy consultancy, and I'm also a trustee of the Internet Watch Foundation. Since we're talking about CSAM, no one has actually given a number, and I think we need a number: it's estimated there are roughly 300 million victims of child sexual abuse and exploitation every year globally. That's about 14% of the world's children each year. So, just to put some scale on this, this is a non-trivial problem. To add to the list that Lisa started with at the beginning: a couple of people have mentioned age estimation and verification, and I just wanted to reiterate that, because in a few sessions earlier in the week people asserted that it is simply a mechanism to let social media platforms get even more data about their users. But as has rightly been said, there are privacy-preserving mechanisms for age verification and estimation, which are important tools to keep children away from adult content, but importantly also to keep adults off child-focused sites and away from child accounts, so we need to make better use of them. What hasn't really been said is that research shows us that end-to-end encrypted messaging platforms are widely used to share child sexual abuse material once it's been captured, including video, and privacy is used as an excuse not to stop that; people have weaponised privacy. There are well-known privacy-preserving techniques to block known CSAM from being shared on these platforms, and I'd love to hear the panel's views on why they're not being used. And then, since tech standards were covered, I think by Sabine, they are changing to the extent that a lot of existing parental controls and content filtering will stop working because metadata is increasingly being encrypted. That's a major problem, not least because much of the policy community does not take part in how tech standards are defined. So we need to find a better way of getting multi-stakeholder engagement, otherwise we will see the problem getting bigger, not smaller. And finally, Kate talked about not knowing whether those tools work or not. The IWF has data and we can test tools, so maybe let's get together afterwards, perhaps with Sean's members, and do some validation of whether they actually are effective. Thank you.
Dhanaraj Thakur: Great, thank you, and thank you for the offer as well. So maybe we can have some quick reactions to this. First there was the point about encryption; Kate, if you have any quick thoughts on that. And then on the point about tech standards, maybe Lisa or Sabine might want to jump in. Kate, do you want to start?
Kate Ruane: Sure, I would love to start with the end-to-end encryption question. I think that oftentimes privacy and safety are placed in tension with each other, and I find that framing difficult because I think privacy and safety are very much in line with one another. Folks often point to the distribution of child sexual abuse material through end-to-end encrypted platforms as a reason to create an encryption backdoor, or as a justification for no longer using or relying on encrypted technologies. I think that can obscure many of the benefits of encrypted technologies, which are particularly salient for human rights defenders, for journalists, and for people who just want to keep their data private to themselves and to the intended recipients, and keep it outside the view not just of governments and other bad actors, but also of tech companies themselves. As tech companies continue to hoover up so much data about all of us, encrypted services are one of the few, and potentially the only, places where they cannot see the content of the communication, and that becomes more and more important as we see the world changing in front of our eyes. But I want to point to particular research done by Riana Pfefferkorn at Stanford, where she looked at the effectiveness of content-oblivious versus content-aware methods of content moderation. What she found is that, for the vast majority of harmful content, content-oblivious methods of detecting these types of abuse are far more effective than content scanning or content-aware methods. So when you weigh the fact that CSAM is a very particular type of harm that is probably best detected via things like content matching, and the ability to detect it on every single surface across the entire internet, against the value of having a safe and effective place to communicate for national security purposes, for the purposes of journalism, for the purposes of ensuring privacy from tech companies and from governments that would otherwise harm people, and for so many other reasons, I think that there are many other ways to detect CSAM on encrypted services, including through user reporting, which has been a very effective way to address that content. We should continue to have encrypted services and continue to think about privacy and safety as complementary to one another and not necessarily at odds.
Dhanaraj Thakur: All right, thank you. In the interest of time, I'm going to ask the next two people in the line to offer your intervention or question very briefly, and then I'll bring it back to the panel before we move to the next round. So please go ahead, and also please remember to introduce yourself.
Audience: Sure. Shiva Bisasa, Trinidad and Tobago. I'm speaking from the experience of knowing a victim of financial sextortion who ultimately took his life, and of what we've had to go through in the follow-up to that, in investigating and reporting, coming from a developing nation. I see a big gap between some of the things I'm hearing about on the stages here and the reality that exists in the developing world. Reporting definitely needs to be improved, and investigative methods need to be improved. If there were the ability for victims to report directly to platforms, that would be good, because we have material that we can put forward to the platforms and we need some outlet to give it to the authorities, or to competent authorities, to deal with the matter. I think the first contributor talked about law enforcement's ability to act, and that is something that needs to be developed further: assistance with developing law enforcement capacity in developing states, as well as general awareness within the population of some of the online harms that exist out there; that needs developing in the developing world. Sean brought up the issue of financial extortion, and he also brought up that there's an exit point, a payment aspect to it. A direct question to Sean would be: does the Lantern project also contemplate signals from the financial transactions aspect? Are you matching or searching within financial transactions to find payments to perpetrators or payments from victims?
Sabrina Vorbau: Thank you, and thank you also for sharing your experiences. Next, yes, please go ahead.
Sergio Tavares: Hi, my name is Sérgio Tavares, I'm from SaferNet Brasil, which is the Safer Internet Center for Brazil. I come from a country with almost 200 million internet users; one third, one in three, are children and teenagers below 18 years old. Live streaming has also become prevalent in Brazil, and we are seeing a growing number of reports, not only in my country, but everywhere. If you look at the U.S., you can see the NCMEC numbers. If you look globally, you can see the INHOPE numbers. So the number of reports, the number of cases, is growing everywhere in the world. On the other hand, the resources are declining. My question is, how do we solve this dilemma? We need resources to create technology, and we need resources to develop policies and prevention programs. But when you look at government resources, they are declining. Private resources, industry resources, are also, if not declining, stable at a very low level. And the problem is growing everywhere. How do we solve this dilemma?
Dhanaraj Thakur: Thank you. Great. Thank you. Okay. So we’ll have some quick reactions from our speakers before moving to the next round. So Sean, there was a question directed to you. Maybe you have some thoughts you can briefly share with us.
Sean Litton: Yeah, thank you. There is a financial component to a lot of crimes against children online. With respect to Lantern specifically, we are piloting with two major global payment providers, sharing signals on Lantern to determine the effectiveness of those signals. At this point they are only ingesting signals from the social media, gaming, et cetera, companies, and we'll have a report out later this summer on the effectiveness of that pilot. If it's effective, then we'll scale it up and bring other financial companies onto the platform. So you're right. And I'm very sorry for what happened to your friend; there have been a number of cases of suicides related to financial extortion. It is a really difficult issue, and law enforcement is a big challenge there, because the perpetrators of the abuse tend to thrive in countries where there's lower law enforcement capacity, while the victims may be in a different country. So even if the report reaches the country where the crime or the abuse originated, law enforcement may not have sufficient capacity to act on it, and this leaves everyone in a bind. But anyway, we are piloting with financial companies, and we hope to share those results later this summer. Thank you.
Dhanaraj Thakur: Thank you. Okay, so we do have another round of questions. I know we have received a lot of important feedback from the audience, but I do have to turn it back to Sabrina so we can have another round, and then maybe we can save some time for the Q&A later on as well. Definitely, please stay in line; we will bring you in towards the end of the session.
Sabrina Vorbau: In our final round we want to turn the attention to policy. It was brought up a couple of times already: the need for more standard mechanisms and also a stronger role for law enforcement as CSAM threats evolve, including the rise of self-generated content. We heard global numbers and regional numbers, and we will now look more into how national and international policy frameworks should respond. So for our final round, the question to the speakers is: how can national and international policy be strengthened to address emerging forms of abuse, such as self-generated content? Lisa is already on screen, so we will start this final round with you.
Lisa Robbins: Thanks so much, Sabrina. I do of course have some thoughts in relation to the question, but I just wanted to mention quickly two things relevant to the discussion we just had. Firstly, not my part of the OECD, but there has been some research done by a part of the OECD called the Financial Action Task Force, which has done specific research on disrupting financial flows relevant to live-streamed sexual abuse and sexual extortion, and I'll share that report with Sabrina so it can be shared with the group as well. And in the paper that I mentioned earlier in relation to CSEA, we developed an extensive list of services that facilitate CSEA. That really reflected what was said about cross-platforming and off-platforming from larger services to smaller services, and the need for scrutiny of the smaller services as well, which I think takes me into the policy question. When we talk about national and international policies and how to strengthen them, the position from the OECD, and what we advocate, is to take a tech-neutral, multi-layered approach, which can range from awareness raising with children and digital literacy through to industry action and then stronger sticks, such as industry regulation and law enforcement; that the approach is multi-stakeholder; and that we engage in good international collaboration so we don't end up with a fragmented regulatory space in this truly global area. But I think what might be useful for this conversation is to think about some specific policy actions that maybe aren't getting the same scrutiny that broader, overarching policy actions are. A couple that have already been touched on and are getting attention are transparency reporting practices on platforms in a number of areas, including transparency not just about what's happening on platform but about how companies are using tools, and we've talked a lot about age assurance. I think there are two other areas that would be interesting to focus on from a targeted policy response when we're looking at the safety of children from CSEA, not just self-generated content and online coercion, but all the different manifestations of CSEA. And I'm really pleased that the two I was concerned about and wanted to raise have already been an important part of the discussion today. The first is recidivism on platforms. As has been noted already, law enforcement is overwhelmed, and I, as with my colleagues on the panel, express my sincere sympathies to the gentleman who mentioned the terrible tragedy of his friend. By recidivism on platforms I'm not necessarily talking about recidivism in the law enforcement sense, but about cases where bad actors are banned from an account and are then able to recreate an account without consequences or scrutiny. Now, Sean has already mentioned the Lantern project, and I will obviously let him speak to that, but we do know that there are problems with recidivism. Research from Australia's eSafety Commissioner under its transparency reporting program shows that companies have very few safeguards against recidivism, with varied practices and limited information sharing across services.
eSafety looked at both CSEA and terrorist and violent extremist material and found that bad actors were able to open an account, have it shut down for a violation, and easily open other accounts, with very little oversight, across platforms and within platforms, of how new account creation is managed and stopped. That is one area that could be focused on. The second is complaint mechanisms, and I'm really happy that that has been mentioned a lot today as well, with a focus on children themselves and their capacity to make complaints. I think Kate mentioned this, and Aidam mentioned this already today: really listening to kids and understanding complaints from a kid's perspective. Again and again, when we talk to kids about what they want from platforms, they want better complaint mechanisms. They want to understand what it actually means to file a complaint, what filing a report means, what the consequences are, and they want responses back so they understand what actually happened with their complaint. I'm happy to speak on other things as well, and to mention the work the OECD has done on transparency reporting, age assurance, broader legislation, and international cooperation, but I would posit those two as important areas where policy action could be focused.
Sabrina Vorbau: Thank you so much, Lisa. You mentioned the multi-layered approach, which I think comes out very strongly from the discussion we're having today, and also, really crucially, the strong collaboration that is needed with law enforcement. I now want to bring in Robbert, also based on the work that Off-Limits is doing, specifically on the hotlines.
Robbert Hoving: Yeah, when I look at policies, I think a lot is already there. In the Netherlands, a lot is there; at the European level, for instance, with the DSA now coming into effect, a lot is there. We have enforcement, we have Safer Internet Centres, so we have policies for how to deal with this. Sometimes I think that authorities could go a bit quicker when a party is not cooperating. For instance, we had troubles with Telegram in the Netherlands. You could also decide to take them out of the App Store, because we know there's a lot of CSAM, there are weapons being sold, et cetera, and I think going directly after these companies is a very good solution, instead of, for instance, only looking at privacy rights, so to say. Going back a bit: the buzzwords of the IGF for me, and this is my first IGF so maybe there are more buzzwords I didn't hear, are the multi-stakeholder approach and public-private cooperation, but I think that is what we need more of. How we're sitting in this room, what we're doing at the IGF, it's like gold: you need to brush it and it starts to shine. The dominant theme with online abuse, and with new forms of online abuse, is content. To me that means you should have an integral approach to how you deal with content online, because another theme that will be dominant in new forms of abuse is that it will start as harmful but lawful. It can still have tremendous effects on people offline, for example people of the LGBTQ+ community, and it can be very racist, for instance with memes, but it is very clear that the dominant theme is content, and a lot of the time it's lawful, it's allowed to post these things, while we know that the people who are victims need help. By working together as public and private stakeholders, in a public-private collaboration, you can pick up signals together, you can do triage, and from those signals you can see that maybe there is a trend online that, as a society, we don't accept anymore, and then you might start to change your legislation, because out of all those signals you decide you want to change the law. That is something we did, for instance, in the Netherlands with doxxing: collecting data to intimidate someone, and spreading that data, such as addresses, has been criminalized since the 1st of January 2024. So I think picking up these signals and working integrally on the content that's online is what we should do more of. As mentioned, I think we have a perfect example in the Netherlands with the ECP, who are in the room. The way we did that in the Netherlands, that multi-stakeholder, public-private working together, actually building something, looking at how we should approach this content, and being curious as a company, as education, as civil society and, for instance, as ministries, that is much more the way forward, because a lot of the policies and legislation, in my opinion, are already there.
Sabrina Vorbau: Thank you. Thank you for sharing that best practice example, and if colleagues are interested, we are also here in the IGF village with Better Internet for Kids and others, to continue the conversation on a more personal level later on. Before we conclude this round, I would like to bring in two more colleagues, and I turn now to Kate to intervene with her points.
Kate Ruane: Yes, very quickly, because I would like to get to questions as well. On policy, I can think of two specific things that I would like to see happen. First, and I have mentioned this already, I would like to see more transparency from companies regarding how they are identifying and removing child sexual exploitation and abuse. We don't actually know the prevalence of this type of abuse on many platforms, because we do not have enough data from the platforms regarding how much they encounter. Second, it would be interesting to see the effectiveness of the tools they are using. Right now there are, generally speaking, I think, three categories of tools being used to prevent, identify, and remove child sexual abuse material. The first is basically design-based features: for example, in order to live stream, you need a certain number of followers, or an account that has existed for a certain amount of time, or you need to go through an age assurance process to show that you are not under a particular age before you can begin to live stream. It would be good to know the degree to which those types of tools are reducing the number of live streams or reducing the amount of problematic content shared. It would also be good to know the data on which content detection tools are trained. At this time, we know that content detection tools like PhotoDNA are trained on known CSAM, but they are specifically designed to identify particular images at rest. If we are talking about content in live streaming, content that is moving, content that is live, it would be helpful to understand, A, the data sets being used to train these types of tools, B, how the data is being sourced and whether it is being done ethically, and C, whether there has been consent to the use of the training data, especially if it is being trained on existing child sexual abuse material. Currently we are not aware, or at least not sufficiently aware, of how these tools are being created and trained, and yet they are being marketed as tools to detect content in live streaming. And then the last thing we need to know is how well signal-detection tools are working: tools like the ones the Tech Coalition talked about, where we look at where your IP address is coming from, or at signals regarding the content associated with a live stream, to try to figure out the likelihood that child sexual abuse is happening within specific content or within a live stream. Data regarding how successful those are, and what mitigation efforts look like when content is misidentified as CSAM, would, I think, be helpful things to engage with from a policy perspective. And the last thing I wanted to talk about is law enforcement. In the United States, one of the biggest problems we have is under-resourced law enforcement. We are actually doing a relatively decent job of identifying CSAM, particularly when it is at rest, and reporting it into NCMEC. What is under-resourced is the ability of law enforcement to address that identified harm. And another capability that is going to be necessary going forward, and it is going to get more and more necessary, is the ability to separate synthetic, AI-generated CSAM from real CSAM that has been created using an actual child.
Because that is going to be essential to helping law enforcement identify children that are in harm's way, so that they can engage in enforcement efforts more efficiently.
Sabrina Vorbau: Thank you. Thank you for your points, and also for concluding a bit on the transparency aspect and on holding companies more accountable. As Robbert mentioned already, in the EU we have the Digital Services Act, which came into force recently with a specific article on the protection of minors. This is also a good example of where, for instance, our colleagues from the Safer Internet Centres are very active, transmitting this transparency through education to children and young people, but also to parents and schools, to make policymaking in general more accessible. One more intervention from our speakers for this final round: we have Sabine on the screen. Last points from your side; the floor is yours.
Sabine Witting: Thanks so much. If you allow me to approach the question from the legal angle, I think there are various areas of law that still require strengthening, both at the international level and at the national level. There is a bit of a misconception that technology-facilitated child sexual abuse and exploitation is equally criminalized across the world and that we all have the same understanding of what it means, and that's certainly not the case. So much more work needs to be done in ensuring strong national legislation, especially in areas such as live streaming: we still have a lot of countries that do not criminalize the mere accessing of child sexual abuse material as a separate criminal offense. That's because the main focus of international law, for example the Optional Protocol to the CRC, was always on possession, because that was the prevalent issue in the 90s when these conventions were drafted. So there's still quite a bit of a push needed so that emerging issues such as live streaming are addressed in national criminal law. The same applies to self-generated content. Self-generated content is often treated as a homogeneous group of content that we simply need to criminalize. However, it is a much more complex issue from a children's rights perspective, because there is content that is produced voluntarily and consensually by adolescents above the age of consent to sexual activity, and these issues also need to be approached with the same care, from a children's rights perspective, within the context in which the content is produced. I also want to go back to one of the points mentioned a few times, which was the question around law enforcement. I would really like to see governments put the same effort into law enforcement and strengthening the criminal justice framework that they currently put into platform regulation. There is, I feel, a bit of an over-focus at the moment on the responsibility of platforms, while we know that if these cases really reach the court system, most of them either fall through the cracks or the children who are forced to go through the court system leave extremely traumatized. This is for various reasons. First of all, we still have a lack of protective measures for children in the criminal justice system, for example protection from cross-examination. There is insufficient court preparation for children, and the presiding magistrates and prosecutors who interview and examine children within the criminal justice system are not trained to do so in an age-appropriate and trauma-informed way. I think this is a huge gap, and I would really like to see an equal effort to strengthen that system. The same applies especially for victims of technology-facilitated child sexual abuse and exploitation: a lot of measures that are typically used in a child-friendly justice process, for example the use of CCTV cameras, can be quite traumatizing for a child who is a victim of live-streamed child sexual abuse and exploitation, because you put the child in front of a camera again in the criminal justice system, even though a camera played a crucial role in the abuse and exploitation.
Much more consideration needs to be paid to the use of technology in the criminal justice sector and how it might impact children who have experienced technology-facilitated abuse and exploitation. The last point I want to make is on the role of the financial sector, because I think that also came up quite a few times, especially in the context of live streaming of child sexual abuse and exploitation where this is done in a commercial way. The financial sector can play a very crucial role in flagging suspicious payments. I think mandatory reporting is really essential to make sure that financial institutions report suspicious transactions that might be linked to live streaming of child sexual abuse and exploitation. One way to do that is to consider these kinds of offenses predicate offenses under anti-money laundering laws, which would oblige financial institutions to file suspicious transaction reports. As we know, especially when it comes to organized crime, and that includes a lot of cases of live-streamed CSEA, following the money is one of the most important leads and starting points for law enforcement. So when we talk about industry responsibility, let's not leave out the financial sector, especially in the context of commercial sexual exploitation and live streaming. Thank you.
Sabrina Vorbau: Thank you very much, Sabine, for closing with a lot of action points and for very clearly reminding us that in all of this we should not forget the well-being of the child, which really needs to be, and should be, at the center of the action. For the last couple of minutes we would like to give the audience, here in the room and online, the possibility for interventions and questions. Please take the microphone and briefly introduce yourself.
Audience: My name is Jameson Cruz. I live in Manaus, one of the biggest Brazilian cities, located in the middle of the Amazon forest, and I have proudly represented the youth delegation of the Brazilian Internet Young Governors Training Program since 2022. During the Internet Forum in Brazil, the largest IGF event in the world, I proposed a workshop focused on the protection of children and adolescents in the online environment. While addressing issues such as sexual exploitation is crucial, we must also emphasize the growing need to educate and communicate with people about the danger of overexposure online. This is a challenge that has intensified in recent years. Violence, discrimination, and the sexualization of minors have expanded beyond the physical and now permeate virtual spaces. The urgency to act is clear. We need platform regulation, public policies, education, and digital literacy to ensure that the internet is a safe space for all, especially for our youngest users. Thank you.
Sabrina Vorbau: Thank you so much, and wonderful to have you here along with many other youth from all over the world, especially the Global South, participating in the IGF. I think programs like the Youth IGF are crucial and important, and, as you rightly said, there is a need to bring more youth voices into the different sessions within the IGF and definitely also to put more spotlight on children's rights.
Sabrina Vorbau: Thank you. We'll turn to our next speaker here in the room.
Audience: Hi, my name is Cosima and I work with the UK Government. I have been a member of the Safer Internet Centre for about six years, working on digital literacy and also policy. I think digital literacy is an incredibly powerful tool. I would just like to get your insight, as there's not a lot of time, on how we can frame literacy initiatives so that, if we want to implement them, people can understand what we're talking about while we are also not breaching a line. And then just quickly on policy: I really appreciated your insights on policy. I wanted to quickly flag that, beyond the EU, several countries are showing apprehension towards the regulation of AI. I think it's important to consider technology-facilitated gender-based violence, and more specifically CSAM, and how we can navigate that, because if we're outlawing the regulation of AI, then that also includes the regulation of AI in relation to CSAM. So, yeah, I just wanted to flag that. Thank you.
Sabrina Vorbau: Thank you so much, and thank you for your participation as well. Kate, I'm going to turn it over to you.
Kate Ruane: Sure. I'm not aware of every single policy effort around the world right now, but the U.S. is currently considering a policy which would, to some extent, prevent U.S. states from regulating AI. Our organization, the Center for Democracy and Technology, has identified this particular provision as incredibly important, and I think it links to this discussion, as the generation of non-consensual intimate images or synthetic CSAM is certainly one problem that might arise. One thing to think about, though, is whether existing laws, without naming AI specifically, already cover that particular type of abuse. Hopefully there are laws around the world that, while they might not mention AI, nonetheless encompass AI-generated child sexual abuse material. Again, I don't know enough, but from our perspective, preventing the regulation of AI whole cloth, without considering the human rights impacts of that type of action, is deeply problematic.
Sabrina Vorbau: Thank you. We have two last interventions and questions in the room, and then we will close with Dhanaraj and some takeaways from the session. So maybe we can take both questions and then respond to them. Please go ahead. Thanks.
Audience: My name is Raoul Plummer. I'm with Electronic Frontier Finland. My first IGF was in João Pessoa; I've been to quite a few of these. And this is the first time I actually have to take a little space to commend something [This portion has been removed from the record for violating the IGF Code of Conduct, particularly the stipulation to "focus discussion or remarks on issues rather than on particular actors, whether they be individuals, groups, organizations or governments, and refrain from personal or ad-hominem attacks"]. And this is a very polarized issue as it is. It's very tough and complicated. I totally appreciate saying that privacy and children's rights can be completely aligned. But this kind of polarization [This portion has been removed from the record for violating the IGF Code of Conduct, particularly the stipulation to "focus discussion or remarks on issues rather than on particular actors, whether they be individuals, groups, organizations or governments, and refrain from personal or ad-hominem attacks"]. Thanks.
Sabrina Vorbau: Thank you. We will give the opportunity for follow-up after the session. Last speaker, please.
Audience: I'm from the Finnish Green Party. My question relates to the issue that, even this week, we have heard a lot from these different international actors and organizations about the efforts to take down this type of content. But quite often the conversation ends with washing their hands: yes, we took down this content, we don't have jurisdiction to go further, and the investigation and everything else needs to be left to law enforcement, which differs severely between countries, as does legislation. Quite often law enforcement does not have the resources, in personnel, skills, or understanding of current technologies, to actually do anything about it. And thus we get repeat offenders and recidivism on those platforms, as well as larger crime syndicates which operate through this or earn part of their income by perpetuating or disseminating this material. It still seems very confusing that this is such a long-lived issue, yet there is no proper official body to unite these platforms and law enforcement. I hope to see some…
Sabrina Vorbau: I think we will all continue, and I will continue, trying our best to collaborate in the way that we have been doing here today. This session has only taken a glimpse at the conversation; hopefully we can open it up and continue it in mutual respect of everyone's opinion and of the work we are doing. We are out of time, but I would like to give a final minute to Dhanaraj to take us through some of the takeaways from our session.
Dhanaraj Thakur: Yes, thank you, Sabrina. We are at time, so I'll be very brief. We are having a very important conversation, and I think we all recognize that this is a very serious problem with significant, harmful, life-threatening impacts on children, their families, and communities. The speakers, as well as audience members, highlighted many important paths forward, particularly in terms of different kinds of recommendations. These included, for example, a point made at the very start: centering children in the design of solutions and in addressing the problem, from the design of technologies right through to the criminal justice system, centering their views as well as their well-being. Many participants and speakers mentioned the relevance and importance of multi-stakeholder approaches, and more specifically how those relate not just to cross-platform and public-private approaches, but also to standard setting and to improving coordination with law enforcement. We also talked about technical solutions, and about where those of us developing policy require more information and transparency around these kinds of technologies. So transparency is a theme that came up several times as well, both regarding the efficacy of technologies and in trying to disaggregate, for example, emerging problems around synthetic CSAM versus actual CSAM. There is a lot that was discussed, and I know, given the line of participants that came up, we didn't even get to every point. We will be sharing a summary report of this afterwards as well. What I would like to end on, of course, is to thank all of our speakers. We really appreciate everyone who was able to join from many different places, here on stage with Robbert and Kate, and also online with Patricia, Aidam, Sabine, and Sean. And then thanks, of course, to Sabrina and Deborah, our co-organizers on this.
Sabrina Vorbau: Thank you, everyone, and thank you to all of the organizers. Thank you all for your participation.
Lisa Robbins
Speech speed
170 words per minute
Speech length
1767 words
Speech time
623 seconds
Safety by design approach with front-end prevention and back-end detection technologies is essential
Explanation
Lisa argues that protecting children from CSEA requires incorporating technologies both at the front end, to prevent harm from occurring, and at the back end, to detect harm should it occur. This approach involves building certain technologies into service design and delivery as part of a comprehensive safety-by-design framework.
Evidence
OECD published a report that posits eight key components for digital safety by design, with technologies falling into buckets of front-end prevention tools and backend detection tools
Major discussion point
Technology Tools and Safety by Design for CSAM Prevention
Topics
Cybersecurity | Human rights
Age assurance is a key component but only two of 50 services systematically assure age on account creation
Explanation
Lisa identifies age assurance as a crucial component of safety by design because companies need to know who their users are to put in place child protective safeguards. However, research shows that very few services actually implement systematic age assurance, revealing significant gaps in current practices.
Evidence
OECD research of 50 services showed only two systematically assure age on account creation; paper on age-related policies was released the day of the session
Major discussion point
Technology Tools and Safety by Design for CSAM Prevention
Topics
Human rights | Legal and regulatory
Companies rarely provide detailed information on moderation practices specific to live streaming
Explanation
Lisa’s research reveals that companies provide very little transparency about their moderation practices, particularly for live streaming content. This lack of detailed information makes it difficult for policymakers to understand where targeted policy action and deployment of safeguarding technology is needed.
Evidence
OECD report examining 80 services found only two provided information on moderation practices specific to live streaming, and only a few provide metrics breaking down live streaming incidents
Major discussion point
Transparency and Accountability in Platform Practices
Topics
Legal and regulatory | Human rights
Agreed with
Agreed on
Transparency from platforms about their practices is insufficient
Protecting children from CSEA is directly protecting their rights to freedom from violence and dignity
Explanation
Lisa emphasizes that protecting children from child sexual exploitation and abuse should not just be done in a way that protects their rights, but that this protection IS protecting their rights. She argues that CSEA directly infringes upon children’s dignity, privacy, and their ability to access other rights online.
Evidence
CSEA infringes upon dignity rights, privacy, and safe online spaces enable children to access rights such as opinion, assembly, information, education and health
Major discussion point
Children’s Rights and Participation in Solutions
Topics
Human rights
Better transparency is needed regarding how companies identify and remove CSEA content
Explanation
Lisa advocates for more transparency from companies about their content detection and removal processes. This includes understanding the prevalence of abuse on platforms, the effectiveness of tools being used, and how well different detection methods are working.
Evidence
Research shows companies have very little safeguards against recidivism and limited information sharing across services; Australia’s e-safety commissioner research on transparency reporting
Major discussion point
Transparency and Accountability in Platform Practices
Topics
Legal and regulatory | Human rights
Agreed with
Agreed on
Cross-platform collaboration is necessary due to the nature of online abuse
Disagreed with
Disagreed on
Focus on Platform Regulation vs. Law Enforcement
Children need better complaint mechanisms with clear understanding of consequences and responses
Explanation
Lisa highlights that research consistently shows children want better complaint mechanisms from platforms. They want to understand what filing a complaint means, what the consequences are, and they want responses back to understand what happened with their complaint.
Evidence
Research on children shows that when talking to kids about what they want from platforms, they consistently ask for better complaint mechanisms and clearer communication about the complaint process
Major discussion point
Children’s Rights and Participation in Solutions
Topics
Human rights | Legal and regulatory
Agreed with
Agreed on
Children’s voices and participation must be central to solution design
Aidam Amenyah
Speech speed
139 words per minute
Speech length
560 words
Speech time
241 seconds
AI-powered platforms should be sensitive to detect, prevent, and report live streaming situations affecting children
Explanation
Aidam argues that as AI technology becomes available, platforms should be designed to be sensitive enough to detect, prevent, and report live streaming situations that affect children. This technological solution should be developed with input from young people themselves to ensure effectiveness.
Evidence
Young people in Africa encounter harmful content virtually every day and specifically mentioned AI as a solution they want to see implemented
Major discussion point
Technology Tools and Safety by Design for CSAM Prevention
Topics
Cybersecurity | Human rights
Laws are not effectively enforced and internet service providers lack accountability
Explanation
Aidam points out that young people in Africa observe that existing laws are not effectively enforced and appear not to be working in their region. He argues that internet service providers should have more responsibility and be held accountable for blocking harmful live streams involving children.
Evidence
Young people reported that laws are not biting and not working effectively in their country/region; ISPs are not blocking live streams of children adequately because there are no laws holding them accountable
Major discussion point
Law Enforcement and Legal Framework Challenges
Topics
Legal and regulatory | Human rights
Agreed with
Agreed on
Law enforcement capacity and resources are inadequate globally
Children want involvement in designing protection solutions to ensure they are fit for purpose
Explanation
Aidam emphasizes that young people want to be directly involved in designing solutions that affect them, rather than having adults create designs without their input. They also want human oversight on AI moderation to prevent false positives and ensure solutions are intuitive for children to use.
Evidence
Young people recommended involvement of children in local communities to design solutions, human oversight on AI moderation, and emphasized that adult-designed solutions may not really impact children effectively
Major discussion point
Children’s Rights and Participation in Solutions
Topics
Human rights
Agreed with
Agreed on
Children’s voices and participation must be central to solution design
Africa faces uneven digital literacy and rapid technology growth exposing children to incidents
Explanation
Aidam describes the unique situation in Africa where there is uneven digital literacy combined with rapid technological growth. While technology helps to some extent, it also exposes children to numerous incidents, with young people reporting they encounter harmful content almost daily.
Evidence
Child Online Africa interacts with young people who report encountering harmful content virtually every day in online spaces
Major discussion point
Regional Perspectives and Challenges
Topics
Development | Human rights
Patricia Aurora
Speech speed
145 words per minute
Speech length
766 words
Speech time
316 seconds
Tech platforms need standard mechanisms and robust legal frameworks to protect children effectively
Explanation
Patricia argues that tech platforms are failing to provide standard mechanisms to protect children, partly because there isn’t a robust legal framework in place. Currently, India borrows sections from various laws that weren’t originally designed to address online child protection, creating gaps in protection.
Evidence
India currently borrows sections from various laws to protect children online, which were not originally designed for online child protection; children facing abuse don’t know how to report situations
Major discussion point
Technology Tools and Safety by Design for CSAM Prevention
Topics
Legal and regulatory | Human rights
Agreed with
Agreed on
Transparency from platforms about their practices is insufficient
Live streaming platforms are used for monetizing pre-recorded child abuse content
Explanation
Patricia highlights that live streaming has become a platform where pre-recorded videos containing child abuse are being monetized for profit. This cross-platform use of child-abusive content has reached peak levels and represents a significant threat to children’s safety.
Evidence
NCMEC CyberTipline data shows that around 9 million of the 36.2 million reports received relate to children in India; pre-recorded videos are being monetized on live streaming platforms
Major discussion point
Financial Aspects and Commercial Exploitation
Topics
Cybersecurity | Human rights
India lacks robust legal frameworks and borrows sections from various laws for child protection
Explanation
Patricia explains that India doesn’t have comprehensive legal frameworks specifically designed for online child protection. Instead, the country relies on borrowing sections from various existing laws that weren’t originally intended to address online child safety issues, creating gaps in protection.
Evidence
Digital Personal Data Protection Act and Digital India Act are being developed to address these gaps and reshape child safety in India
Major discussion point
Regional Perspectives and Challenges
Topics
Legal and regulatory | Human rights
Cultural contexts must be considered when addressing child safety in different regions
Explanation
Patricia emphasizes the importance of considering cultural aspects when addressing child safety online, particularly in the context of live streaming. She argues that solutions need to account for how different cultures view and approach child safety in online spaces.
Evidence
Discussion of cultural lens needed for safety in online spaces and how freedom of internet access can turn problematic without adequate cultural considerations
Major discussion point
Regional Perspectives and Challenges
Topics
Sociocultural | Human rights
Sean Litton
Speech speed
136 words per minute
Speech length
1141 words
Speech time
503 seconds
Session metadata and third-party signals can generate risk scores for broadcasts without analyzing actual content
Explanation
Sean describes a tool being developed with a major live streaming platform that uses session metadata and third-party signals to generate risk scores for broadcasts. This approach preserves privacy by not analyzing the actual content of live streams while still enabling child safety teams to identify potentially harmful broadcasts.
Evidence
Tool uses participant characteristics like country of origin and use of anonymization services; development began fall 2024 with testing planned for summer 2025
Major discussion point
Technology Tools and Safety by Design for CSAM Prevention
Topics
Cybersecurity | Human rights
On-device machine learning can detect nudity while preserving privacy through local processing
Explanation
Sean highlights Apple’s communication safety feature as an example of privacy-preserving technology that uses on-device machine learning to detect nudity. Because analysis happens on the device, neither Apple nor third parties can observe the content, and the feature has been expanded to FaceTime calls and made available to all iOS developers.
Evidence
Apple’s feature blurs detected images and shows age-appropriate safety information; recently expanded to FaceTime calls and available by API for free to all iOS developers
Major discussion point
Technology Tools and Safety by Design for CSAM Prevention
Topics
Cybersecurity | Human rights
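A minimal conceptual sketch of the on-device pattern follows, assuming a hypothetical local classifier; it is not Apple’s actual API. What matters is that both the media and the decision stay on the device, and the only outcome is a local UI action such as blurring.

```python
# Conceptual sketch of on-device sensitive-content handling (not Apple's API).
# The classifier below is a stub; the point is that the image and the decision
# never leave the device.

def classify_locally(image_bytes: bytes) -> float:
    """Placeholder for an on-device ML model returning a nudity probability."""
    return 0.0  # a real implementation would run a local model; nothing is uploaded

def handle_incoming_image(image_bytes: bytes) -> dict:
    probability = classify_locally(image_bytes)
    blur = probability >= 0.8           # illustrative threshold
    return {
        "blur": blur,                    # the UI blurs the image locally
        "show_safety_resources": blur,   # age-appropriate guidance, chosen locally
        # Deliberately no network call: neither the platform nor any third party
        # observes the content or the classification result.
    }

print(handle_incoming_image(b"\x89PNG..."))
```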
Bad actors exploit multiple services across the tech ecosystem, requiring industry collaboration
Explanation
Sean explains that perpetrators typically use multiple platforms in their abuse activities – they might contact a child on a gaming platform, move them to private messaging, and then use live streaming platforms. Individual companies lack the complete picture of this cross-platform abuse, making collaboration essential.
Evidence
Example of abuse spanning social media, gaming, live streaming, and payment apps; individual companies are unaware of activities on other platforms
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Cybersecurity | Legal and regulatory
Agreed with
Agreed on
Cross-platform collaboration is necessary due to the nature of online abuse
Lantern program enables secure signal sharing between companies about policy-violating accounts and activities
Explanation
Sean describes Lantern as the first cross-platform signal sharing program that helps companies strengthen enforcement of their child safety policies. It allows companies to securely share signals about accounts and activities that violate child sexual exploitation and abuse policies, revealing a fuller picture of harm.
Evidence
Members shared hundreds of thousands of signals through Lantern last year, leading to account actions, content removal, and disruption of offender networks; helped flag contact and trafficking cases
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Cybersecurity | Legal and regulatory
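The sketch below illustrates the general shape of privacy-aware signal sharing, assuming hashed account identifiers and a shared salt; the field names and hashing scheme are hypothetical and are not Lantern’s actual schema.

```python
# Illustrative sketch of cross-platform signal sharing (not Lantern's schema).
# Identifiers are hashed before sharing so a partner can match accounts it
# already knows without learning raw identities.

import hashlib
import json
from datetime import datetime, timezone

def hash_identifier(raw_id: str, salt: str) -> str:
    """Hash an account identifier with a shared salt before it leaves the platform."""
    return hashlib.sha256((salt + raw_id).encode("utf-8")).hexdigest()

def build_signal(raw_account_id: str, signal_type: str, salt: str) -> str:
    record = {
        "hashed_account_id": hash_identifier(raw_account_id, salt),
        "signal_type": signal_type,            # e.g. "csea_policy_violation"
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "source_platform": "platform-A",       # illustrative label
    }
    return json.dumps(record)

# A receiving platform hashes its own account IDs with the same salt and compares.
shared = build_signal("user-12345", "csea_policy_violation", salt="shared-demo-salt")
print(shared)
```

A real deployment would add matching protocols, retention limits, appeals, and audit trails, which this toy record omits.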
Financial extortion is a major component of crimes against children online
Explanation
Sean acknowledges that there is a significant financial component to many crimes against children online, including cases that have led to suicides. He notes that perpetrators often operate from countries with lower law enforcement capacity while victims may be in different countries, creating enforcement challenges.
Evidence
Lantern is piloting with two major global payment providers; there have been cases of suicides related to financial extortion; perpetrators often operate from countries with lower law enforcement capacity
Major discussion point
Financial Aspects and Commercial Exploitation
Topics
Cybersecurity | Economic
Law enforcement capacity varies significantly between countries, creating enforcement gaps
Explanation
Sean points out that even when reports reach countries where crimes originate, law enforcement may not have sufficient capacity to act on the reports. This creates a challenging situation where everyone is left in a bind, as perpetrators can exploit jurisdictions with weaker enforcement capabilities.
Evidence
Perpetrators tend to operate in countries with lower law enforcement capacity while victims may be in different countries; law enforcement may lack sufficient capacity to act on reports
Major discussion point
Law Enforcement and Legal Framework Challenges
Topics
Legal and regulatory | Development
Agreed with
Agreed on
Law enforcement capacity and resources are inadequate globally
Robbert Hoving
Speech speed
180 words per minute
Speech length
1383 words
Speech time
459 seconds
One-third of hotline reports involve self-generated content, with 75% being prepubescent children
Explanation
Robbert shares alarming statistics from Off-Limits hotline data showing that one-third of all reports involve self-generated content – meaning young children making sexualized images themselves. Most concerning is that 75% of these self-generated reports involve prepubescent children, not teenagers, indicating very young children are involved.
Evidence
Data from Off-Limits hotline shows 1/3 of reports are self-generated content; 75% of self-generated content involves prepubescent children; content made through webcams, phones, etc.
Major discussion point
Self-Generated Content and Emerging Threats
Topics
Human rights | Cybersecurity
Companies should collaborate more with schools on curriculum development rather than just pushing tools
Explanation
Robbert argues that instead of simply providing tools to schools, tech companies should help build up educational curriculum in collaboration with schools. He emphasizes that if everything is placed on schools for education, there should also be adequate capacity at schools to handle these responsibilities.
Evidence
Combining education efforts with the tech sector could be beneficial; schools need capacity building to handle educational responsibilities around online safety
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Sociocultural | Human rights
Agreed with
Agreed on
Multi-stakeholder collaboration is essential for effective child protection solutions
Safer Internet Centers provide safe spaces for victims and help with content removal
Explanation
Robbert explains that Safer Internet Centers serve as safe spaces for both parents and victims, providing someone to listen to them, helping with content removal, and assisting with further action like going to police. They also provide awareness raising to help schools prevent incidents from occurring.
Evidence
Off-Limits operates hotline for child abuse material, helpline for other transgressive behavior, and Stop It Now prevention line; they help with content takedown and connecting victims to appropriate help
Major discussion point
Education and Prevention Strategies
Topics
Human rights | Sociocultural
Grooming in online spaces and risk behavior from previous abuse contribute to self-generation
Explanation
Robbert explains that the high percentage of prepubescent self-generated content may result from grooming in online spaces, risk behavior seeking attention due to previous abuse, or children enacting behaviors they’ve seen online at too young an age. This highlights the complex factors behind self-generated content.
Evidence
Research with the Ministry of Justice is planned to investigate causes; more than half of the perpetrators calling Stop It Now are males under 26 who escalated from accessing adult websites at a young age
Major discussion point
Self-Generated Content and Emerging Threats
Topics
Human rights | Cybersecurity
Prevention through awareness raising and school materials is crucial alongside enforcement
Explanation
Robbert emphasizes that while there will always be people with bad intent that cannot be completely stopped, much more can be done in prevention through awareness raising and providing schools with materials. He argues for strengthening prevention efforts based on data coming into Safer Internet Centers.
Evidence
Safer Internet Centers provide awareness raising and help schools with materials; data from centers shows prevention opportunities
Major discussion point
Education and Prevention Strategies
Topics
Sociocultural | Human rights
Kate Ruane
Speech speed
147 words per minute
Speech length
1863 words
Speech time
755 seconds
Cross-platform efforts create significant risks for human rights and need multi-stakeholder engagement for transparency
Explanation
Kate argues that while cross-platform efforts are necessary to address CSEA, they create even more significant risks for human rights like free expression and privacy. Multi-stakeholder engagement is essential to ensure that responses remain proportionate and that mistakes don’t disproportionately impact innocent people.
Evidence
Tech Coalition has done human rights impact assessments on its CSEA efforts; need for transparency and appeals processes in cross-platform information sharing
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Human rights | Legal and regulatory
Agreed with
Agreed on
Cross-platform collaboration is necessary due to the nature of online abuse
Companies need to share information about reporting process effectiveness and design features
Explanation
Kate emphasizes that enforcement efforts are only as good as reporting processes, and simple design features like how easy it is to find the report button matter significantly. She argues that platforms should share transparency reporting about their reporting processes to consistently improve how they combat harm while protecting rights.
Evidence
Simple design features like report button placement affect effectiveness; platforms should share information about how their reporting processes work
Major discussion point
Transparency and Accountability in Platform Practices
Topics
Human rights | Legal and regulatory
Agreed with
Agreed on
Transparency from platforms about their practices is insufficient
Disagreed with
Disagreed on
Focus on Platform Regulation vs. Law Enforcement
Privacy and safety should be viewed as complementary rather than opposing values
Explanation
Kate argues against framing privacy and safety as being in tension with each other, stating they are very much aligned. She emphasizes that encrypted technologies provide essential protection for human rights defenders, journalists, and people wanting to keep their data private from both governments and tech companies.
Evidence
Research by Riana Pfefferkorn at Stanford showed content-oblivious methods are more effective than content-aware methods for most harmful content; encrypted services are one of few places where tech companies cannot see communication content
Major discussion point
Privacy and Encryption Considerations
Topics
Human rights | Cybersecurity
Disagreed with
Disagreed on
Privacy vs. Safety in Encryption
End-to-end encryption provides essential protection for human rights defenders and journalists
Explanation
Kate defends encrypted technologies as particularly important for human rights defenders, journalists, and people who want privacy from governments and tech companies. She argues that as tech companies collect more data, encrypted services become one of the few places where they cannot see communication content.
Evidence
Encrypted technologies are salient for human rights defenders and journalists; tech companies continue to collect vast amounts of data, making encrypted services increasingly important
Major discussion point
Privacy and Encryption Considerations
Topics
Human rights | Cybersecurity
Under-resourced law enforcement struggles with identification and prosecution of perpetrators
Explanation
Kate identifies under-resourced law enforcement as one of the biggest problems in the United States. While the country does relatively well at identifying and reporting CSAM to NCMEC, the ability to address identified harm through law enforcement action is significantly under-resourced.
Evidence
U.S. does decent job identifying CSAM and reporting to NCMEC but law enforcement response to identified harm is under-resourced
Major discussion point
Law Enforcement and Legal Framework Challenges
Topics
Legal and regulatory | Development
Agreed with
Agreed on
Law enforcement capacity and resources are inadequate globally
AI-generated synthetic CSAM needs to be separated from real CSAM for effective law enforcement
Explanation
Kate argues that it will become increasingly necessary to separate synthetic, AI-generated CSAM from real CSAM created using actual children. This separation is essential to help law enforcement identify children who are actually in harm and engage in enforcement efforts more efficiently.
Evidence
Tool needed to distinguish between synthetic and real CSAM to help law enforcement identify actual children in harm
Major discussion point
Self-Generated Content and Emerging Threats
Topics
Cybersecurity | Legal and regulatory
Sabine Witting
Speech speed
169 words per minute
Speech length
1377 words
Speech time
487 seconds
Multi-stakeholder collaboration is essential beyond just cross-platform efforts, including affected communities
Explanation
Sabine argues that while cross-platform collaboration is important, it’s not sufficient alone. Multi-stakeholder efforts must include all people affected by technologies – human rights advocates, child rights advocates, academia, parents, and children themselves – to effectively address competing rights issues.
Evidence
Example of IEEE standard on prevention of CSEA in generative AI being developed with human rights advocates, industry, and tech experts with strong Global South representation
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Human rights | Legal and regulatory
Agreed with
Agreed on
Cross-platform collaboration is necessary due to the nature of online abuse
Global South representation is often underrepresented in industry standard-making processes
Explanation
Sabine points out that industry standards are often not accessible and are drafted by industry or age assurance providers themselves, potentially creating bias. Representatives from the Global South are usually underrepresented in these processes, which is problematic given the important role industry standards play.
Evidence
Industry standards are often drafted by age assurance providers themselves; Global South underrepresented in standard-making; generative AI issues differ between Global South and Global North
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Development | Legal and regulatory
Technology-facilitated child sexual abuse is not equally criminalized across the world
Explanation
Sabine explains that there’s a misconception that technology-facilitated child sexual abuse is equally criminalized globally with the same understanding. Many countries still don’t criminalize mere accessing of child sexual abuse material as a separate offense, and issues like live streaming need to be addressed in national criminal law.
Evidence
Many countries lack criminalization of accessing CSAM as a separate offense; international instruments such as the Optional Protocol to the CRC focused on possession because of the 1990s context in which they were drafted; live streaming issues need to be addressed in national criminal law
Major discussion point
Law Enforcement and Legal Framework Challenges
Topics
Legal and regulatory | Human rights
Criminal justice systems lack protective measures and trauma-informed approaches for child victims
Explanation
Sabine argues that governments should put equal effort into strengthening criminal justice frameworks as they do into platform regulation. She highlights that most cases either fall through cracks or leave children traumatized due to lack of protective measures, insufficient court preparation, and inadequately trained personnel.
Evidence
Lack of protection from cross-examination; insufficient court preparation; prosecutors and magistrates not trained in age-appropriate, trauma-informed approaches; CCTV cameras can be traumatizing for live stream abuse victims
Major discussion point
Law Enforcement and Legal Framework Challenges
Topics
Legal and regulatory | Human rights
Agreed with
Agreed on
Law enforcement capacity and resources are inadequate globally
Disagreed with
Disagreed on
Focus on Platform Regulation vs. Law Enforcement
Self-generated content requires complex approach considering voluntary versus coercive production
Explanation
Sabine explains that self-generated content is often treated as a homogeneous group that should simply be criminalized, but it’s much more complex from a children’s rights perspective. Some content is produced voluntarily by adolescents above the age of consent, requiring careful consideration of the context in which content is produced.
Evidence
Content produced voluntarily, consensually by adolescents above age of consent requires different approach; context of content production must be considered from children’s rights perspective
Major discussion point
Self-Generated Content and Emerging Threats
Topics
Human rights | Legal and regulatory
Financial sector can play crucial role in flagging suspicious payments related to CSEA
Explanation
Sabine emphasizes that the financial sector can play a crucial role in flagging suspicious payments, especially in commercial live streaming child sexual abuse and exploitation. She argues for mandatory reporting and considering these offenses as predicate offenses under anti-money laundering laws.
Evidence
Financial institutions should report suspicious transactions; considering CSEA as predicate offenses under anti-money laundering laws would oblige suspicious transaction reports; ‘follow the money’ is important lead for law enforcement in organized crime
Major discussion point
Financial Aspects and Commercial Exploitation
Topics
Economic | Legal and regulatory
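As a toy illustration of how “follow the money” can be operationalized, the rules below flag payment patterns sometimes associated with commercial exploitation, such as many low-value cross-border transfers. Thresholds, corridors, and rule names are invented for the example and do not reflect any provider’s actual screening logic.

```python
# Toy rule-based screen for suspicious payments of the kind a provider might feed
# into suspicious-transaction reporting. All thresholds and rules are illustrative.

from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    currency: str
    sender_country: str
    recipient_country: str
    recent_small_payments_to_recipient: int  # count over a trailing window

def flag_reasons(p: Payment) -> list[str]:
    reasons = []
    if p.recent_small_payments_to_recipient >= 10 and p.amount < 50:
        reasons.append("many repeated low-value payments to the same recipient")
    if p.sender_country != p.recipient_country and p.amount < 50:
        reasons.append("low-value cross-border transfer in a monitored corridor")
    return reasons

payment = Payment(25.0, "USD", "AA", "BB", recent_small_payments_to_recipient=12)
if flag_reasons(payment):
    # In practice this would generate a suspicious-transaction report for human
    # compliance review, not an automated accusation.
    print("Flag for compliance review:", flag_reasons(payment))
```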
Andrew Kempling
Speech speed
135 words per minute
Speech length
410 words
Speech time
181 seconds
Age verification is important to prevent adults from accessing child accounts and content
Explanation
Andrew argues that age estimation and verification are important tools not just to keep children off adult content, but importantly to keep adults off child sites and from accessing child accounts. He emphasizes that there are privacy-preserving mechanisms available to accomplish this.
Evidence
Estimated 300 million victims of child sexual abuse globally each year (14% of world’s children); privacy-preserving age verification mechanisms exist
Major discussion point
Law Enforcement and Legal Framework Challenges
Topics
Human rights | Legal and regulatory
Disagreed with
Disagreed on
Privacy vs. Safety in Encryption
Jutta Croll
Speech speed
174 words per minute
Speech length
230 words
Speech time
79 seconds
Privacy-preserving age verification mechanisms exist without being intrusive
Explanation
Jutta explains that age assurance instruments can prevent adults from joining spaces made for children while being data minimizing and privacy preserving. The Global Age Assurance Standard Summit concluded this can be done without being intrusive or gathering excessive data from children.
Evidence
Global Age Assurance Standard Summit communique on child rights-based approach; working on ISO standard 27566; can provide safe spaces for children without intrusive data gathering
Major discussion point
Privacy and Encryption Considerations
Topics
Human rights | Legal and regulatory
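A minimal sketch of a data-minimizing age attestation follows, assuming a trusted verifier that signs only an over/under-threshold claim. The token format and the symmetric signing key are illustrative simplifications (a real deployment would use asymmetric signatures), and this is not the ISO/IEC 27566 mechanism itself; the point is that the platform learns only a yes/no answer, not a birthdate or identity.

```python
# Illustrative data-minimizing age attestation: the verifier signs a single
# boolean claim, and a child-only service admits only under-threshold users.

import hashlib
import hmac
import json

VERIFIER_KEY = b"demo-shared-key"  # placeholder; real systems would use asymmetric keys

def issue_attestation(over_threshold: bool) -> dict:
    """The verifier signs only the boolean claim; no birthdate or identity is shared."""
    claim = json.dumps({"over_threshold": over_threshold}).encode("utf-8")
    signature = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode("utf-8"), "sig": signature}

def admit_to_child_space(attestation: dict) -> bool:
    """A child-only service checks the signature and reads only the yes/no answer."""
    claim = attestation["claim"].encode("utf-8")
    expected = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False
    return json.loads(claim)["over_threshold"] is False

token = issue_attestation(over_threshold=False)  # the user is below the adult threshold
print("Admit to child-only space:", admit_to_child_space(token))
```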
Sergio Tavares
Speech speed
131 words per minute
Speech length
184 words
Speech time
84 seconds
Brazil faces growing numbers of reports while resources are declining globally
Explanation
Sergio highlights the dilemma facing countries like Brazil, with almost 200 million internet users (one-third being children), where live streaming problems are growing and reports are increasing globally, but resources from governments, the private sector, and industry are either declining or remaining at very low levels.
Evidence
Brazil has 200 million internet users with 1/3 being children under 18; NCMEC and INHOPE numbers show growing reports globally; government and industry resources are declining or stable at low levels
Major discussion point
Regional Perspectives and Challenges
Topics
Development | Human rights
Dhanaraj Thakur
Speech speed
162 words per minute
Speech length
1029 words
Speech time
380 seconds
Children should be centered in the design of solutions from technology to criminal justice systems
Explanation
Dhanaraj summarizes that centering children in the design of solutions is a key takeaway from the session, emphasizing that this should apply from the initial design of technologies all the way through to the criminal justice system. This includes centering both their views and their well-being throughout the process.
Evidence
Multiple speakers and audience members highlighted the importance of centering children’s views and well-being in solution design
Major discussion point
Children’s Rights and Participation in Solutions
Topics
Human rights
Agreed with
Agreed on
Children’s voices and participation must be central to solution design
Sabrina Vorbau
Speech speed
146 words per minute
Speech length
1884 words
Speech time
769 seconds
Youth participation in development processes and policymaking is crucial for effective solutions
Explanation
Sabrina emphasizes throughout the session that youth participation is crucial in the development process, policymaking, and design of solutions. She highlights the importance of making young people part of the entire process rather than just recipients of adult-designed solutions.
Evidence
Multiple references to youth participation throughout session; emphasis on including youth voices in IGF sessions and putting spotlight on children’s rights
Major discussion point
Children’s Rights and Participation in Solutions
Topics
Human rights
Agreed with
Agreed on
Children’s voices and participation must be central to solution design
Audience
Speech speed
148 words per minute
Speech length
1472 words
Speech time
595 seconds
Developing nations need capacity building in law enforcement and general population awareness
Explanation
An audience member from Trinidad and Tobago shared experiences of dealing with a financial sextortion case that led to suicide, highlighting the gap between discussions at international forums and reality in developing nations. They emphasized the need for improved reporting, investigative methods, and capacity building in law enforcement.
Evidence
Personal experience with financial sextortion case leading to suicide; challenges in reporting and investigation in developing nations; need for direct victim reporting to platforms
Major discussion point
Regional Perspectives and Challenges
Topics
Development | Legal and regulatory
Agreed with
Agreed on
Law enforcement capacity and resources are inadequate globally
Digital literacy and media literacy are powerful tools requiring sensitive implementation
Explanation
An audience member emphasized that digital literacy is incredibly powerful but needs to be framed sensitively so people can understand the concepts without breaching appropriate boundaries. They also raised concerns about policies that would restrict AI regulation, which could in turn affect the regulation of AI-generated CSAM.
Evidence
Need for sensitive framing of literacy initiatives; concerns about policies preventing AI regulation that could affect CSAM regulation
Major discussion point
Education and Prevention Strategies
Topics
Sociocultural | Legal and regulatory
Overexposure online and digital literacy education are growing needs
Explanation
A youth representative from Brazil emphasized the growing need to educate people about the dangers of overexposure online, noting that violence, discrimination, and sexualization of minors have expanded from physical spaces into virtual spaces, requiring urgent action through regulation, policies, education, and digital literacy.
Evidence
Violence and discrimination have expanded from physical to virtual spaces; need for platform regulations, public policies, education, and digital literacy to ensure safe internet for youngest users
Major discussion point
Education and Prevention Strategies
Topics
Sociocultural | Human rights
Deborah Vassallo
Speech speed
168 words per minute
Speech length
84 words
Speech time
30 seconds
Online moderation support is essential for managing virtual participation in child safety discussions
Explanation
Deborah serves as the coordinator of the Safer Internet Center in Malta and provides online moderation support for the workshop session. Her role demonstrates the importance of having dedicated online safety expertise to facilitate discussions about child protection in digital spaces.
Evidence
Introduced as coordinator of Safer Internet Center in Malta supporting online moderation for the session
Major discussion point
Cross-Platform Collaboration and Multi-Stakeholder Approaches
Topics
Human rights | Sociocultural
Agreements
Agreement points
Multi-stakeholder collaboration is essential for effective child protection solutions
Better and more granular transparency and information is really key for policymakers to be able to react and understand where targeted policy action and deployment of safeguarding technology is needed
Multi-stakeholder collaboration is essential beyond just cross-platform efforts, including affected communities
Cross-platform efforts create significant risks for human rights and need multi-stakeholder engagement for transparency
Companies should collaborate more with schools on curriculum development rather than just pushing tools
Children should be centered in the design of solutions from technology to criminal justice systems
All speakers agreed that addressing CSAM requires comprehensive multi-stakeholder approaches involving governments, tech companies, civil society, academia, and importantly, children themselves. They emphasized that no single entity can solve this problem alone.
Human rights | Legal and regulatory
Children’s voices and participation must be central to solution design
Children want involvement in designing protection solutions to ensure they are fit for purpose
Tech platforms need standard mechanisms and robust legal frameworks to protect children effectively
Children need better complaint mechanisms with clear understanding of consequences and responses
Children should be centered in the design of solutions from technology to criminal justice systems
Youth participation in development processes and policymaking is crucial for effective solutions
There was strong consensus that children and young people should not be passive recipients of adult-designed solutions but active participants in designing, implementing, and evaluating child protection measures.
Human rights
Cross-platform collaboration is necessary due to the nature of online abuse
Bad actors exploit multiple services across the tech ecosystem, requiring industry collaboration
Cross-platform efforts create significant risks for human rights and need multi-stakeholder engagement for transparency
Multi-stakeholder collaboration is essential beyond just cross-platform efforts, including affected communities
Better transparency is needed regarding how companies identify and remove CSEA content
Speakers agreed that because perpetrators use multiple platforms in their abuse activities, individual companies cannot address the problem in isolation and must collaborate while maintaining appropriate safeguards.
Cybersecurity | Legal and regulatory
Law enforcement capacity and resources are inadequate globally
Law enforcement capacity varies significantly between countries, creating enforcement gaps
Under-resourced law enforcement struggles with identification and prosecution of perpetrators
Laws are not effectively enforced and internet service providers lack accountability
Criminal justice systems lack protective measures and trauma-informed approaches for child victims
Developing nations need capacity building in law enforcement and general population awareness
There was unanimous agreement that law enforcement globally lacks sufficient resources, training, and capacity to effectively address online child sexual exploitation, creating significant gaps in protection.
Legal and regulatory | Development
Transparency from platforms about their practices is insufficient
Companies rarely provide detailed information on moderation practices specific to live streaming
Companies need to share information about reporting process effectiveness and design features
Tech platforms need standard mechanisms and robust legal frameworks to protect children effectively
Speakers agreed that technology platforms provide inadequate transparency about their content moderation practices, detection tools, and reporting mechanisms, making it difficult to assess effectiveness and improve policies.
Legal and regulatory | Human rights
Similar viewpoints
Both speakers emphasized that age assurance is crucial for child protection and that privacy-preserving methods exist, though implementation remains limited across platforms.
Age assurance is a key component but only two of 50 services systematically assure age on account creation
Privacy-preserving age verification mechanisms exist without being intrusive
Human rights | Legal and regulatory
Both speakers recognized the significant financial component of online child exploitation and the important role financial institutions can play in detection and prevention.
Financial extortion is a major component of crimes against children online
Financial sector can play crucial role in flagging suspicious payments related to CSEA
Economic | Legal and regulatory
Both speakers highlighted the complexity of emerging technologies in child exploitation and the need for more sophisticated legal and technical responses to address these evolving threats.
AI-generated synthetic CSAM needs to be separated from real CSAM for effective law enforcement
Technology-facilitated child sexual abuse is not equally criminalized across the world
Cybersecurity | Legal and regulatory
Both emphasized the critical importance of education and prevention strategies, particularly in school settings, as essential components of comprehensive child protection approaches.
Prevention through awareness raising and school materials is crucial alongside enforcement
Digital literacy and media literacy are powerful tools requiring sensitive implementation
Sociocultural | Human rights
Unexpected consensus
Privacy and safety as complementary rather than opposing values
Privacy and safety should be viewed as complementary rather than opposing values
Session metadata and third-party signals can generate risk scores for broadcasts without analyzing actual content
Despite coming from different perspectives (civil society advocacy vs. tech industry), both speakers agreed that privacy and child safety don’t have to be in tension, and that technical solutions can preserve privacy while enhancing protection. This consensus was unexpected given the typical framing of privacy vs. safety debates.
Human rights | Cybersecurity
Self-generated content requires nuanced approaches beyond simple criminalization
One-third of hotline reports involve self-generated content, with 75% being prepubescent children
Self-generated content requires complex approach considering voluntary versus coercive production
Both speakers, from different professional backgrounds (hotline operations vs. legal academia), agreed that self-generated content cannot be addressed through simple criminalization and requires understanding the complex circumstances behind its creation. This nuanced view was unexpected in a field often characterized by zero-tolerance approaches.
Human rights | Legal and regulatory
Global South perspectives are systematically underrepresented in standard-setting
Global South representation is often underrepresented in industry standard-making processes
Africa faces uneven digital literacy and rapid technology growth exposing children to incidents
Cultural contexts must be considered when addressing child safety in different regions
There was unexpected consensus across speakers from different regions and roles about the systematic exclusion of Global South perspectives in developing child protection standards and solutions, highlighting a significant gap in current approaches.
Development | Legal and regulatory
Overall assessment
Summary
The discussion revealed remarkably high levels of consensus across diverse stakeholders on fundamental principles: the need for multi-stakeholder collaboration, centering children’s voices, addressing cross-platform abuse, improving law enforcement capacity, and increasing platform transparency. There was also unexpected agreement on nuanced issues like privacy-safety compatibility and the complexity of self-generated content.
Consensus level
High consensus with significant implications – the broad agreement across civil society, academia, industry, and government representatives suggests strong foundation for coordinated action. However, the consensus also highlighted significant implementation gaps, particularly in law enforcement capacity, Global South representation, and platform accountability. This suggests that while there is agreement on what needs to be done, substantial work remains in building the resources and mechanisms to achieve these shared goals.
Differences
Different viewpoints
Privacy vs. Safety in Encryption
Privacy and safety should be viewed as complementary rather than opposing values
Age verification is important to prevent adults from accessing child accounts and content
Kate argues that privacy and safety are aligned and defends end-to-end encryption as essential for human rights defenders and journalists, while Andrew suggests that privacy is being weaponized as an excuse not to stop CSAM sharing on encrypted platforms and advocates for privacy-preserving techniques to block known CSAM.
Human rights | Cybersecurity
Focus on Platform Regulation vs. Law Enforcement
Criminal justice systems lack protective measures and trauma-informed approaches for child victims
Companies need to share information about reporting process effectiveness and design features
Better transparency is needed regarding how companies identify and remove CSEA content
Sabine argues for equal focus on strengthening criminal justice frameworks as platform regulation, emphasizing that governments over-focus on platform responsibility while neglecting law enforcement capacity. Kate and Lisa focus more on improving platform transparency and accountability mechanisms.
Legal and regulatory | Human rights
Unexpected differences
Self-Generated Content Criminalization Approach
One-third of hotline reports involve self-generated content, with 75% being prepubescent children
Self-generated content requires complex approach considering voluntary versus coercive production
While both speakers acknowledge the serious issue of self-generated content, they have different perspectives on how to address it. Robbert presents alarming statistics about prepubescent self-generation and focuses on prevention through education and addressing root causes like grooming. Sabine argues for a more nuanced legal approach that considers the context and voluntary nature of some adolescent content, challenging the assumption that all self-generated content should be treated the same way legally.
Human rights | Legal and regulatory
Overall assessment
Summary
The main areas of disagreement center around the balance between privacy and safety measures, the appropriate focus between platform regulation versus law enforcement strengthening, implementation approaches for age verification, and the scope of multi-stakeholder collaboration in cross-platform efforts.
Disagreement level
The disagreement level is moderate but significant for policy implications. While speakers largely agree on the severity of the problem and the need for comprehensive solutions, their different emphases on technical solutions versus legal frameworks, privacy preservation versus detection capabilities, and industry-led versus multi-stakeholder approaches could lead to very different policy outcomes. These disagreements reflect broader tensions in internet governance between security, privacy, and human rights that require careful balancing rather than simple resolution.
Takeaways
Key takeaways
Child sexual exploitation and abuse (CSEA) in live streaming contexts requires a multi-layered, technology-neutral approach combining awareness raising, industry action, regulation, and law enforcement
Safety by design principles should be integrated into platforms from the outset, incorporating both front-end prevention and back-end detection technologies while preserving privacy
Cross-platform collaboration is essential as bad actors exploit multiple services across the tech ecosystem, requiring secure signal sharing between companies
Children’s voices and participation must be centered in designing protection solutions, from technology development to criminal justice processes
Self-generated content represents a significant portion of CSEA cases (one-third of reports, with 75% involving prepubescent children), requiring nuanced approaches that consider voluntary versus coercive production
Multi-stakeholder approaches involving civil society, academia, tech industry, law enforcement, and affected communities are crucial for effective solutions
Transparency in platform practices, moderation techniques, and tool effectiveness is essential for policy development and accountability
Age assurance technologies can be implemented in privacy-preserving ways but are currently underutilized across platforms
Regional differences in legal frameworks, enforcement capacity, and cultural contexts require tailored approaches while maintaining international cooperation
Privacy and safety should be viewed as complementary rather than opposing values in developing solutions
Resolutions and action items
Tech Coalition’s Lantern program will publish results of the financial payment provider pilot later in summer 2025
OECD will share Financial Action Task Force research on disrupting financial flows related to live stream sexual abuse
Internet Watch Foundation offered to collaborate with tech companies to validate tool effectiveness using their data
Session organizers committed to producing a summary report of the discussion
Participants agreed to continue conversations at the IGF village and through ongoing multi-stakeholder collaboration
Need to strengthen Safer Internet Center networks globally to provide safe spaces for reporting and support
Companies should improve transparency reporting on moderation practices, especially for live streaming content
Development of privacy-preserving age assurance mechanisms should be prioritized across platforms
Unresolved issues
How to address the resource gap where CSEA cases are growing globally while government and industry resources are declining or stable at low levels
Lack of standardized evaluation criteria for assessing effectiveness of age assurance and content detection technologies
Insufficient law enforcement capacity and training, particularly in developing countries, to handle cross-border CSEA cases
Gap between content detection/reporting and actual prosecution, with many cases falling through criminal justice system cracks
How to balance AI regulation policies with the need to address AI-generated synthetic CSAM
Recidivism on platforms where bad actors can easily recreate accounts after being banned
Limited information sharing across platforms and insufficient safeguards against repeat offenders
Trauma-informed approaches needed in criminal justice systems for child victims of technology-facilitated abuse
How to make reporting mechanisms more accessible and understandable for children
Addressing the weaponization of privacy arguments versus legitimate privacy protection needs
Suggested compromises
Using session metadata and behavioral signals rather than content scanning to preserve privacy while detecting abuse
Implementing on-device machine learning for content detection that doesn’t require third-party access to content
Developing content-oblivious rather than content-aware moderation methods for most harmful content
Creating multi-stakeholder standard-setting processes that include Global South representation and human rights perspectives
Balancing platform responsibility with strengthened law enforcement and criminal justice frameworks
Combining industry self-regulation with government oversight through frameworks like the EU Digital Services Act
Using privacy-preserving age assurance technologies that minimize data collection while protecting children
Implementing graduated responses from design-based features to content detection to human oversight
Developing collaborative approaches between tech companies and educational institutions for curriculum development rather than just tool deployment
Thought provoking comments
I think it’s really important to acknowledge up front that protecting children from CSEA should not just be done in a way that protects and promotes their rights. I think it’s really important to acknowledge that protecting children from CSEA is protecting and promoting their rights, most obviously freedom from violence, but also acknowledging that this CSEA can infringe upon dignity rights, privacy, and a safe online space is really important for enabling children to access a large number of rights in today’s reality, such as opinion, assembly, information, or education and health.
Speaker
Lisa Robbins
Reason
This reframes the entire discussion by establishing that child protection isn’t separate from rights promotion but IS rights promotion. It challenges the common framing that positions safety and rights as competing interests.
Impact
This foundational comment set the tone for the entire session, establishing a rights-based framework that other speakers consistently referenced. It prevented the discussion from falling into the typical ‘safety vs. rights’ dichotomy and instead positioned child protection as inherently rights-affirming.
When we look at the victims, for instance, at the numbers of the hotline at Off Limits, of all the reports coming in, one third of those reports is self-generated, meaning young children making sexualized images themselves… And of those 1 third of reports, 75% is prepubescent, so implying very young children. This is not children going to middle school in their teenagers. This is really young children.
Speaker
Robbert Hoving
Reason
This statistic fundamentally challenges assumptions about online child exploitation by revealing that a significant portion involves very young children creating content themselves, suggesting complex issues around grooming, risk behavior, and early exposure rather than just external predation.
Impact
This revelation shifted the conversation from focusing primarily on external threats to recognizing the complexity of self-generated content and the need for different intervention approaches. It influenced subsequent discussions about age verification, education, and the nuanced nature of child-generated material.
So for example, the bad actor might contact a child on a gaming platform, move them to a private messaging platform, and then perhaps use a live streaming platform down the road. So the abuse spans social media, gaming, live streaming, payment apps, and more. But the individual company is obviously unaware of what happened on the other platforms.
Speaker
Sean Litton
Reason
This comment illuminates the sophisticated, multi-platform nature of modern child exploitation, demonstrating why isolated platform responses are insufficient and why cross-platform collaboration is essential.
Impact
This insight fundamentally changed how participants viewed the problem scope, leading to extensive discussion about cross-platform solutions like Project Lantern and the need for industry-wide collaboration rather than individual platform responses.
I think that oftentimes privacy and safety are placed in tension with each other, and I find that framing to be a little bit difficult because I think privacy and safety are very much in line with one another… encrypted services are one of the few and potentially actually only place where they cannot see the content of the communication, and that becomes more and more important as we see the world changing in front of our eyes.
Speaker
Kate Ruane
Reason
This challenges the dominant narrative that encryption and child safety are inherently opposed, arguing instead that privacy technologies can enhance safety for vulnerable populations including children.
Impact
This comment sparked significant debate and pushback from audience members, creating one of the most contentious moments in the discussion. It forced participants to grapple with the complexity of balancing different safety needs and challenged simplistic solutions that would weaken encryption.
I would really like to see the same effort from governments that they at the moment put into platform regulation, they should put into law enforcement and strengthening the criminal justice framework. Because there is a bit of a, I feel like an over-focus at the moment on the responsibility of platforms, while we know that if these cases then really reach the court system, most of them either A, fall through the cracks, or B, the children that are forced to go through the court system leave extremely traumatized.
Speaker
Sabine Witting
Reason
This comment critically examines the policy focus, arguing that the emphasis on platform regulation may be misplaced when the criminal justice system itself is failing children. It highlights systemic gaps in law enforcement and court procedures.
Impact
This shifted the policy discussion from focusing primarily on platform responsibilities to examining the broader ecosystem of child protection, including law enforcement capacity and child-friendly justice procedures. It influenced later discussions about resource allocation and multi-stakeholder approaches.
One of the things they make us understand is that virtually every day there’s something that they encounter in the space… they felt that it’s important for platforms to be faster in identifying incidences of abuse and rescuing young people if they fall victim, in prosecuting perpetrators, and also in keeping at the back of the mind of designers that children are using the platform.
Speaker
Aidam Amenyah
Reason
This brings the authentic voice of children from Africa into the discussion, emphasizing that young people want to be involved in designing solutions and that they experience these harms daily, not as rare occurrences.
Impact
This comment reinforced the importance of youth participation throughout the session and provided concrete evidence of the scale and frequency of the problem from children’s perspectives. It influenced discussions about design processes and the need to center children’s voices in solution development.
Overall assessment
These key comments fundamentally shaped the discussion by establishing a rights-based framework, revealing the complexity of modern child exploitation, and challenging conventional wisdom about technology solutions. Lisa Robbins’ opening reframing prevented the session from falling into typical safety-versus-rights debates, while Robbert Hoving’s statistics about self-generated content forced participants to confront uncomfortable realities about very young children. Sean Litton’s cross-platform analysis shifted focus from individual platform solutions to ecosystem-wide approaches, leading to substantive discussion about industry collaboration. Kate Ruane’s defense of encryption created the session’s most contentious moment, forcing nuanced consideration of competing safety needs. Sabine Witting’s critique of policy priorities challenged the focus on platform regulation over criminal justice reform. Together, these comments elevated the discussion beyond surface-level solutions to examine systemic issues, power dynamics, and the need for comprehensive, rights-respecting approaches to child protection online.
Follow-up questions
How can we better address the gap between technology solutions discussed at conferences and the reality in developing countries, particularly regarding law enforcement capacity and investigative methods?
Speaker
Shiva Bisasa (Trinidad and Tobago)
Explanation
There’s a significant disconnect between advanced solutions being discussed and the practical challenges faced in developing nations where law enforcement lacks resources, skills, and understanding of current technologies to effectively investigate and prosecute cases.
Does Project Lantern contemplate signals from financial transactions to match perpetrator payments or payments from victims?
Speaker
Shiva Bisasa (Trinidad and Tobago)
Explanation
Understanding how financial transaction data can be integrated into cross-platform detection systems could help identify and disrupt the economic aspects of child exploitation networks.
How can we solve the dilemma of growing CSAM cases worldwide while resources (government, private, industry) are declining or remaining stable at low levels?
Speaker
Sérgio Tavares (SaferNet Brasil)
Explanation
This addresses a fundamental sustainability challenge in combating online child exploitation – the mismatch between increasing problem scale and insufficient resource allocation.
How can we develop clear evaluation criteria to assess the effectiveness and robustness of age assurance technologies, and understand who is adversely impacted by these technologies?
Speaker
Sabine Witting (Leiden University)
Explanation
Current industry standards for age assurance lack transparency and multi-stakeholder input, particularly from Global South perspectives, making it difficult to evaluate their true effectiveness and potential harms.
How can we establish better benchmarks for content detection tools in live streaming, particularly to verify whether marketed technologies actually work effectively?
Speaker
Kate Ruane (Center for Democracy and Technology) and Andrew Kempling (Internet Watch Foundation)
Explanation
Many organizations market live streaming detection technologies without sufficient validation of their effectiveness, and there’s a need for independent testing and benchmarking.
How can we better understand the data sets used to train content detection tools for live streaming, including how data is sourced ethically and whether consent exists for training data use?
Speaker
Kate Ruane (Center for Democracy and Technology)
Explanation
Transparency is needed regarding how AI tools for detecting CSAM in live streams are trained, particularly concerning the ethical sourcing of training data and consent issues.
How can we develop better tools and processes to separate synthetic/AI-generated CSAM from real CSAM to help law enforcement prioritize cases involving actual children in harm?
Speaker
Kate Ruane (Center for Democracy and Technology)
Explanation
As AI-generated content becomes more prevalent, law enforcement needs efficient ways to distinguish between synthetic and real abuse material to focus resources on rescuing actual victims.
How can we frame digital literacy initiatives to be sensitive and understandable while not breaching appropriate boundaries when discussing CSAM with children?
Speaker
Cosima (UK Government)
Explanation
There’s a need to develop age-appropriate educational approaches that inform children about online safety without exposing them to inappropriate content or concepts.
How can we navigate the tension between AI regulation restrictions and the need to regulate AI-generated CSAM?
Speaker
Cosima (UK Government)
Explanation
Some jurisdictions are considering broad restrictions on AI regulation, which could inadvertently prevent regulation of AI-generated child sexual abuse material.
How can we establish a proper official body to unite platforms and law enforcement to address the jurisdictional gaps and resource limitations that allow repeat offenders and crime syndicates to operate?
Speaker
Audience member (Finnish Green Party)
Explanation
Current fragmented approaches between platforms and law enforcement across different jurisdictions create gaps that allow perpetrators to continue operating, suggesting need for better coordinated international response mechanisms.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.