WS #70 Combating Sexual Deepfakes Safeguarding Teens Globally

25 Jun 2025 09:00h - 10:00h


Session at a glance

Summary

This discussion focused on governance responses and challenges related to the proliferation of sexual deepfakes targeting teenagers across the internet. The panel examined how AI tools designed to create deepfake images have dramatically increased, with nearly 35,000 AI models available for public download on one platform, many specifically marketed for generating non-consensual intimate imagery. The speakers presented case studies from South Korea, where hundreds of secret Telegram chat rooms were discovered sharing deepfake sexual videos of students, affecting over 500 schools and shocking the entire country.


Korean representatives explained that deepfake-related sex crimes increased seven-fold from 156 cases in 2021 to 1,202 cases in 2024, with most perpetrators being teenagers themselves who often view the activity as harmless fun rather than serious criminal behavior. The Korean government responded with educational guidebooks for different age groups and developed technical innovations including a Korean deepfake dataset achieving 96% detection accuracy and specialized tools to identify content involving minors. However, the phenomenon of “platform hopping” emerged as perpetrators shifted from Telegram to other platforms to avoid detection.


Education expert Janice Richardson emphasized the need for cross-platform collaboration and proper teacher training, highlighting successful programs in France, the Netherlands, and Morocco that incorporate digital citizenship education. Brazilian representative Juliana Cunha reported a historic spike in child sexual abuse material reports, with 90% of messaging-app reports involving Telegram, and stressed that this issue reflects broader systemic gender inequalities requiring cultural prevention measures beyond legal responses. Participants agreed that combating sexual deepfakes requires coordinated multi-stakeholder approaches combining legal frameworks, educational initiatives, technical solutions, and cultural change to protect teenagers globally.


Key points

## Major Discussion Points:


– **The Scale and Impact of AI-Generated Sexual Deepfakes Targeting Teens**: The discussion highlighted alarming statistics, particularly from South Korea where over 500 schools were affected by deepfake videos, with cases rising from 156 in 2021 to 1,202 in 2024. The speakers emphasized how easily accessible AI tools have democratized the creation of non-consensual intimate imagery, with many perpetrators being teenagers themselves who often view it as “just for fun.”


– **Legal and Regulatory Challenges Across Jurisdictions**: Panelists discussed the inadequacy of current legal frameworks to address deepfake crimes, noting issues with cross-border enforcement, the difficulty of prosecuting cases involving modified images, and the need for stronger international cooperation. The conversation highlighted how perpetrators can easily bypass restrictions using VPNs and platform-hopping techniques.


– **Educational and Cultural Prevention Strategies**: The discussion emphasized the critical need for comprehensive digital literacy education, starting from early childhood, that goes beyond technical awareness to address underlying cultural issues around gender-based violence and consent. Speakers shared innovative approaches including magician-delivered presentations, peer education programs, and multi-stakeholder curriculum development.


– **Platform Responsibility and Technical Solutions**: The conversation addressed the role of tech companies in both enabling and preventing harm, discussing detection technologies, content moderation challenges on encrypted platforms like Telegram, and the need for proactive measures to identify and remove harmful content before it spreads widely.


– **Multi-Stakeholder Collaboration and Victim Support**: Panelists stressed the importance of coordinated responses involving governments, schools, tech companies, NGOs, and communities, while emphasizing the need for trauma-informed support systems for victims and the importance of listening to young people’s perspectives in developing solutions.


## Overall Purpose:


The discussion aimed to examine governance responses and challenges related to AI-generated sexual deepfakes targeting teenagers from multiple stakeholder perspectives. The workshop sought to identify effective legal and educational measures, explore collaboration strategies for incorporating digital literacy into school curricula, and develop proactive policies to prevent harm against teenagers globally.


## Overall Tone:


The discussion maintained a serious, urgent, and collaborative tone throughout. Speakers demonstrated deep concern about the rapidly escalating problem while remaining solution-focused and constructive. The tone was professional yet passionate, with participants sharing both alarming statistics and innovative approaches. There was a consistent emphasis on the need for immediate action combined with long-term cultural change, and the conversation remained respectful and inclusive of diverse international perspectives and expertise levels.


Speakers

**Speakers from the provided list:**


– **Kenneth Leung** – Netmission Board Advisor, Asia Pacific Policy Observatory advisor, workshop moderator from the UK


– **Ji Won Oh** – Netmission Ambassador, holds a bachelor’s degree in Latin American Studies and Political Science and a master’s degree in Political Science and International Relations, providing the youth perspective


– **Yi Teng Au** – Netmission Ambassador from the technical community, majors in Computer Science, Microsoft Certified AI Engineer


– **Janice Richardson** – Educator with 50+ years’ experience across multiple countries (Australia, Europe, Africa), sits on the Safety Advisory Boards of Meta and Snapchat, partner in European Commission and Council of Europe projects focusing on AI impacts on education


– **Juliana Cunha** – From SaferNet Brazil, holds a bachelor’s degree in psychology and a master’s degree in culture and society, coordinates the National Helpline for Online Safety, NGO perspective


– **Andrew Campling** – Trustee with the Internet Watch Foundation


– **Robbert Hoving** – From Offlimits, the Safer Internet Centre in the Netherlands, and INHOPE


– **Maciej Gron** – From the Polish research institute NASK and the hotline Dyżurnet.pl, lawyer


– **Torsten Krause** – Affiliated with Digital Opportunities Foundation based in Germany


– **Yuri Bokovoy** – From Finnish Green Party


– **Participant** – Multiple unidentified participants with various questions and comments


**Additional speakers:**


– **Mariana** – Colombian, works at DataSphere Initiative, leads Youth for a Data Future Project


– **Frances** – Part of Youthdig organization


– **Claire** – Student from Hong Kong


– **Sana** – From NetMission


Full session report

# Comprehensive Report: Governance Responses to Sexual Deepfakes Targeting Teenagers


## Executive Summary


This international workshop brought together experts, policymakers, educators, and youth representatives to examine the escalating crisis of AI-generated sexual deepfakes targeting teenagers. The discussion centered on alarming data from South Korea, where deepfake-related sex crimes increased seven-fold from 156 cases in 2021 to 1,202 cases in 2024, affecting over 500 schools. Supporting perspectives from Brazil and Europe revealed this as a global phenomenon requiring coordinated responses. The panel emphasized that effective solutions must address both immediate technical and legal challenges while tackling underlying cultural factors, particularly gender-based violence and inequality.


## Global Scale of the Problem


### The Korean Crisis


**Ji Won Oh** and **Yi Teng Au** presented comprehensive data on South Korea’s deepfake crisis, where hundreds of secret Telegram chat rooms shared deepfake sexual videos of students across more than 500 schools nationwide. The seven-fold increase in deepfake-related crimes shocked the country and prompted urgent government action.


Particularly concerning was the attitude of perpetrators, with 54% claiming they created deepfake content “for fun.” **Ji Won Oh** noted that “many students don’t care. Because they think it’s just funny and laugh and share the videos,” highlighting a fundamental disconnect between the severe impact on victims and perpetrators’ casual attitudes.


### International Perspectives


**Juliana Cunha** from SaferNet Brazil reported a historic spike in child sexual abuse material reports in 2023, with AI-generated content as a key factor. She emphasized that “this problem already affected large schools in Brazil, especially private schools, with several cases being reported by media outlets.” Particularly troubling was the finding that 90% of messaging app reports involved Telegram.


**Janice Richardson** provided European and African context, noting that reporting is hampered in countries such as Morocco and Tunisia by cultural factors around shame and humiliation.


## Root Causes and Systemic Challenges


### Cultural and Gender Dimensions


A crucial insight emerged regarding the cultural roots of the problem. **Juliana Cunha** argued that “the misuse of AI to create sexualized images of peers is not a tech issue or a legal gap. It’s a reflection of broader systemic gender inequalities. That’s why prevention must also be cultural.”


This reframed the discussion from viewing deepfakes as merely a technological problem to understanding them as manifestations of deeper cultural attitudes about gender, consent, and power dynamics.


### Technical Accessibility


The discussion revealed concerning ease of access to harmful AI tools. **Yi Teng Au** noted that the accessibility of AI tools like Stable Diffusion makes content creation too easy despite safeguards, while **Robbert Hoving** highlighted how readily search engines surface deepfake creation apps when searched.


### Legal Framework Gaps


Multiple speakers identified significant gaps in current legal frameworks. **Yi Teng Au** explained that “laws against deepfakes are unclear because modified images create legal ambiguity about prosecution.” **Kenneth Leung** noted jurisdictional challenges, while **Maciej Gron** highlighted how legal systems struggle with situations involving both child victims and perpetrators.


### Platform Governance Challenges


**Andrew Campling** from the Internet Watch Foundation emphasized that end-to-end encrypted messaging services present the biggest challenge for blocking CSAM sharing. The phenomenon of “platform hopping,” where perpetrators shift between platforms to avoid detection, further complicates enforcement efforts.


## Current Responses and Solutions


### Educational Innovations


The discussion showcased several innovative educational approaches:


**South Korea**: The Ministry of Education published five guidebooks tailored to different age groups addressing deepfakes.


**Brazil**: **Juliana Cunha** described creating educational resources combining legal guidance with real-life cases and interactive activities, emphasizing practical, relatable content.


**International Models**: **Janice Richardson** shared successful approaches including Morocco’s cascade training model with resource persons in schools, the Netherlands’ six-week transition courses, and Scandinavian countries using magicians to deliver educational content.


### Technical Developments


South Korea has pioneered several technical solutions. **Yi Teng Au** described the creation of a Korean DeepFake dataset that achieved 96% accuracy in detecting altered videos, and Seoul Metropolitan Government’s development of specialized detection tools targeting content involving minors.
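To make the detection approach concrete, the sketch below shows how a frame-level deepfake classifier of this general kind is typically trained: face frames labelled real or fake are used to fine-tune a pretrained image backbone. This is a minimal illustrative example in Python/PyTorch, not the Seoul National University team’s actual pipeline; the directory layout (`frames/train/real`, `frames/train/fake`), model choice, and hyperparameters are all assumptions.

```python
# Minimal sketch of a frame-level deepfake detector (illustrative only).
# Assumes face crops organised as frames/train/real/*.jpg and
# frames/train/fake/*.jpg; real detectors such as the Korean one described
# above train on far larger curated datasets.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("frames/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone for binary real/fake classification.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A video-level verdict is then usually obtained by averaging per-frame scores across sampled frames, which is one reason locally representative datasets matter: accuracy figures such as the 96% cited above depend heavily on the training data resembling the content actually being screened.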


### Multi-Stakeholder Initiatives


**Andrew Campling** described the Internet Watch Foundation’s anonymous reporting systems for illegal images including AI-generated content. Various speakers emphasized successful partnerships between government agencies, schools, and civil society organizations.
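The block-list approach referenced here (and later by Andrew Campling in the transcript) generally works by matching uploads against hashes of content already assessed as illegal, rather than by inspecting images heuristically. As a rough illustration only, the sketch below uses a simple average hash with a Hamming-distance threshold; production systems such as the IWF’s rely on far more robust perceptual hashes, and the file names and threshold here are invented for the example.

```python
# Illustrative perceptual-hash matching against a block list (not the IWF's
# actual technology). Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to greyscale, threshold each pixel on the mean, pack bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# In deployment the hash list comes from a trusted body such as a hotline;
# "known_bad.jpg" stands in for an image already assessed as illegal.
blocklist = {average_hash("known_bad.jpg")}

def matches_blocklist(path: str, threshold: int = 5) -> bool:
    """Flag an upload that is within a few bits of any listed hash,
    so minor re-encoding or resizing does not defeat the match."""
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in blocklist)

print(matches_blocklist("upload.jpg"))
```

Because matching happens against hashes rather than the images themselves, services can screen uploads without retaining or redistributing the material, which is also why this technique is central to the encrypted-messaging debate noted below: the check has to run somewhere the content is visible, such as on the sender’s device.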


## Key Debates and Perspectives


### Youth-Centered Approaches


A significant theme was the importance of centering young people’s voices. **Juliana Cunha** emphasized that “our core belief is the best way to protect children is by listening to them. This project challenged the usual top-down approach. Instead of responding with moral panic or punitive measures, we are asking how do teens actually experience this? What do they think would help?”


**Janice Richardson** noted that “young people also turn to me and say, you should be educating the adults about this because very often, it’s them, the ones that are suffering, the ones that are doing this, and the ones that don’t have the opportunity to tackle it.”


### Regulatory Tensions


Disagreement emerged around regulatory strategies. **Frances** from Youthdig advocated for “a real crackdown on these kind of applications,” while others emphasized the complexity of enforcement and need for nuanced approaches.


**Yuri Bokovoy** raised concerns that “regulations protecting children risk misuse by authoritarian governments to silence free speech,” highlighting the delicate balance required in crafting effective legislation.


### Privacy Versus Protection


Tension emerged between privacy rights and child protection, particularly regarding encrypted communications. **Torsten Krause** noted that the European Union has been “discussing compromising privacy to detect CSAM in encrypted environments for three years” without resolution.


### Industry Accountability


**Janice Richardson** challenged technological priorities: “If we’re so clever with technology, why can’t we make something so that once we’ve put an image online, it becomes indelible, it becomes unchangeable? I’d like more efforts from the tech industry.”


## Areas of Consensus and Collaboration Needs


### Multi-Stakeholder Imperative


There was unanimous agreement that addressing sexual deepfakes requires coordinated efforts across sectors. **Juliana Cunha** emphasized the need for a coordinated response “bringing together tech companies, public institutions, society and academia.”


### Collaboration Gaps


**Janice Richardson** identified significant gaps: cross-platform collaboration is lacking, with companies needing to share knowledge more effectively, and industry should partner with education in curriculum development rather than just providing tools.


### International Action


Multiple speakers called for enhanced international coordination. **Janice Richardson** emphasized that international action is needed, with people joining together to pressure search engines and platforms.


## Future Directions and Recommendations


### Immediate Actions


– Create international coalitions to pressure search engines to remove access to deepfake creation apps


– Develop anonymous reporting mechanisms to address cultural barriers around shame


– Implement cascade training models with resource teachers in schools


### Medium-term Goals


– Develop detection datasets tailored to local needs, following South Korea’s example


– Strengthen industry accountability measures against products designed to violate rights


– Create comprehensive legal frameworks addressing cases with minor victims and perpetrators


### Long-term Cultural Change


Participants emphasized that sustainable solutions require long-term cultural interventions addressing underlying gender inequalities and attitudes about consent and digital behavior.


## Conclusion


The workshop revealed that addressing sexual deepfakes targeting teenagers requires a fundamental shift from reactive to proactive approaches. The crisis reflects broader systemic inequalities that cannot be addressed through technological or legal solutions alone, but require sustained commitment to cultural change and gender equality.


The strong consensus among international stakeholders provides a foundation for developing comprehensive global strategies, while identified disagreements highlight areas requiring further dialogue. The urgency of the situation, combined with the complexity of effective responses, underscores the need for immediate action on multiple fronts while maintaining focus on long-term prevention through education and cultural transformation.


Most importantly, the conversation emphasized that solutions must be youth-centered, culturally sensitive, and implemented through coordinated international collaboration. The participants’ commitment to continued collaboration provides hope for developing more effective responses to this critical challenge facing young people globally.


Session transcript

Kenneth Leung: Good morning, fellow internet governance practitioners. Glad to see you here in the morning, especially as this is the first session of the day. We also have people from outside of this room and all around the world joining us online. Across this hour, we are going to examine governance responses and challenges from different sectors, as well as stakeholder perspectives, towards the proliferation of sexual deepfakes across the internet. “Dramatic rise”: this is what an Oxford Internet Institute study called the growth in the number of AI tools specifically designed to create deepfake images of identifiable people. This very timely study, formally published just this Monday, unveiled that there were nearly 35,000 AI models available for public download on one platform service for generative AI, many of which are even marketed with the intention to generate NCII, non-consensual intimate imagery. The platform concerned has responded to the Oxford study and is taking action now that it has been exposed. And some governments are also taking action. Take where I’m based as an example: the UK’s 2025 Data Use and Access Act became effective just last Thursday, with a provision criminalizing the creation, and the requesting of the creation, of purported intimate images. However, the additional safeguards apply only to adults, as such behaviors targeting minors, as explained by the government, are already covered by the law. But is that enough? Especially safeguards in the age of generative AI for teenagers, in the so-called in-between phase from the innocence of childhood to adulthood. This workshop will focus directly and exactly on this, with three questions. Number one, what legal and educational measures are most effective in addressing the creation and spread of sexual deepfakes among school-going teens? Number two, how can different stakeholders collaborate to ensure that school curricula incorporate digital literacy and awareness about the dangers of sexual deepfakes? Number three, what proactive policies can countries implement to anticipate technological changes and prevent sexual deepfake harms against teenagers globally? My name is Kenneth Leung, Netmission Board Advisor. Joining me today on stage, we have, from my left, Ms. Ji Won Oh, Netmission Ambassador, bringing in the youth perspective; Mr. Yi Teng Au, also a Netmission Ambassador, from the technical community; Ms. Janice Richardson of Insight SA, offering views from the education field; and Ms. Juliana Cunha from SaferNet Brazil, giving insights from an NGO standpoint. And we also have you all. After initial remarks by our speakers, we invite you to chime in with questions, comments, perspectives, or experience on the topic. You will have two minutes to share your thoughts, either in front of this mic or online by using the raise hand function. If you have more thoughts and resources that you would like to share after this session, we would love to have them; please share them on this website, csit.tk, which will close one week after this workshop, so that we can synthesize the discussion here and on the comments platform and consolidate it into a report next month. So, it’s csit.tk. The platform is up right now, and I will show it one more time later. So, hold your thoughts for the first 20 minutes, and let’s hear from Ji Won Oh and Yi Teng on the youth and technical perspectives in combating sexual deepfakes and safeguarding teens globally. Ji Won Oh holds a bachelor’s degree in Latin American Studies and Political Science and a master’s degree in Political Science and International Relations. Yi Teng brings the technical perspective to the workshop; he majors in Computer Science and is a Microsoft Certified AI Engineer. Over to you.


Ji Won Oh: Okay. Hello, everyone. Today, I will talk about how deepfakes affect students’ lives and mental health. So, first, let me explain what a deepfake is. Deepfakes are highly realistic videos, audio, or images, for example, faces generated using AI. The technologies that create deepfakes include generative adversarial networks and machine learning. So, with deepfake tools, people can copy someone’s face and voice without permission. In August last year, a big problem was found in Korean schools. Hundreds of secret chat rooms on Telegram were sharing deepfake sexual videos. These videos used the real faces of students and were shared among elementary, middle, and high school students. This shocked the whole country. With technology and social media platforms, anyone can now make or watch a sexual video easily. And it’s not just others. Many young people are involved. In 2021, there were only 156 police reports about deepfake sex crimes in Korea. But in 2024, the number increased to 1,202 cases. That’s about seven times more in just three years. The most serious problem is this: most of the people who made these deepfakes are teenagers. So, when students hear the news, will they be scared and think they should never do it again? Not sure. Many students now use deepfake tools. These tools are easy to find online. Even young students can make deepfakes. But here’s the problem. Actually, many students don’t care, because they think it’s just funny and laugh and share the videos. They don’t think about the pain this causes. So we can think many students feel shocked, scared, and frustrated when they see deepfakes. But victims feel anxious and unsafe. They also suffer from social stigma. Because of fear, students may lose trust in their peers. Deepfakes can hurt someone’s reputation and make them feel helpless. There are several reasons, shown on the screen, why sexual deepfake crimes continue to happen in Korea. First, even after the crackdown, it is still unclear whether the new law is strong enough to stop these crimes. Some people say that more action is needed, especially to make internet companies act faster. Even when videos are illegal, they can spread for a long time if removal takes too long. Some experts say that internet service providers must be more responsible. They should block, monitor, and prevent deepfake content before it spreads. Also, the law should not only punish the people who make deepfakes. It should also punish those who ask others to create them. Because technology changes so fast, we need laws that are clear, strong, and stable, even for new types of deepfake crimes. Second, this issue is not only about sex crimes. It is part of a bigger problem: much misinformation affects Korean society. For example, fake videos of politicians and celebrities can harm democracy by spreading lies. That’s why we need a unified national response, with the government, schools, and companies working together. Finally, one of the most important causes is lack of education. Many young people don’t really understand how serious these crimes are. Some even think it’s funny or harmless. That’s why schools need to teach students what deepfakes are, how dangerous they are, and what will happen if someone breaks the law. Also, people need better digital literacy, the ability to understand what is real or fake online. We must help young people and elders be smarter and more careful online. So what can we do when deepfakes happen? Police start investigations. They try to find who made and shared the video. Schools separate victim and offender. They also give counseling and support to the victims. There are some legal protections, but they are not enough right now. We need stronger laws to punish deepfake crimes in all economies. We also need to educate students to raise awareness. It is important to support the victims and protect their safety. Okay, thank you so much.


Yi Teng Au: Hi, my name is Yi Teng. So to put things into context, here are some numbers I want to share. In South Korea, over 500 schools were affected by deepfake videos and photos, many taken secretly at school or from social media like X and Instagram. A December survey of 2,145 middle and high schoolers showed that 54% said offenders just did it for fun. Other reasons included curiosity or thinking that the consequences were minor, highlighting a lack of awareness about the seriousness. In response, the Ministry of Education published five guidebooks this April. They are tailored to different age groups: there is a cartoony version for elementary students, and separate editions for middle and high schoolers, teachers, and caregivers. The guidebooks cover three key situations: What if I’m the victim? What if someone around me is? And what if something I did caused harm? For the technical folks here, I’ll highlight three innovations. First, the Korean DeepFake dataset, released in 2021, contains 2.73 terabytes of photos and videos. It was created to address the lack of Korean representation in existing datasets. With this, Seoul National University students achieved 96% accuracy in detecting altered videos. Economies without such datasets might consider developing one tailored to their local needs. Second, the Seoul Metropolitan Government developed a detection tool, enhanced in May 2025, targeting illicit content involving minors. It identifies items such as schoolbooks, uniforms, and dolls, and even youth slang, flagging underage content even when faces are not visible. It also scans search terms and drafts multilingual reports, depending on the site’s host country. Lastly, I will share a phenomenon called platform hopping. Many deepfake crimes in South Korea began on Telegram. But as the company now actively cooperates with South Korean authorities, it was noted on a forum, DC Inside, that perpetrators have shifted to other platforms, making detection harder due to lower data logging. Thank you.


Kenneth Leung: Thank you, Ji Won and Yi Teng, for sharing the Korean case, and also what the Korean government did after the case happened. And I just want to plug here the Asia Pacific Policy Observatory that I advise. This particular case was also heavily debated and spotlighted in one of our latest analyses on how the recent advancement of AI capabilities is transforming online safety and security, which includes challenges in the surge of AI-generated child sexual abuse material and AI-powered gender-based violence. So I invite you to take a look when it’s out this week during IGF, and I’ll be sure to send a link up at that commenting platform, csit.tk. And now we would love to hear from Janice for thoughts from the education and private-sector perspective. Janice Richardson has been an educator for 50-plus years in half a dozen different countries, including Australia, Europe, and Africa. She sits on the Safety Advisory Boards of Meta and Snapchat, and is a partner in European Commission and Council of Europe projects focusing on the impacts of AI, misinformation, algorithms, etc. on democracy at all levels of education. Janice, please.


Janice Richardson: Thank you. And first of all, thank you very much to all of those people that I reached out to in the UK, Poland, France, and the Netherlands, who gave me information about the solutions. But let’s begin with the challenges. Schools still have a tendency to post the faces, the images, of their pupils in sports, in all sorts of activities. And this is the first problem. There are so many images out there. Maybe they were consensual in the beginning, but they can very easily be used when creating deepfakes. Secondly, it’s the availability of tools such as Nudify, Undress, Dressed. I don’t know. There are so many out there, and I find it absolutely amazing that they can still exist on the market. Then, of course, we have the enforcement challenges. We’ve done quite a bit of work in Morocco, training the judiciary so that they really understand how you collect electronic proof, how you use electronic proof. But not all laws are adapted to this type of proof, and therefore we do need legal amendments in many countries. The media, the way they report these things: we’ve also started training the media to make them understand that being spectacular may be a good way to get a lot of people to come to your website, but it’s certainly not helping the victims, and it’s actually calling for a lot of copycat behaviour. Then I would cite the lack of cross-platform collaboration. I know there are projects such as Lantern where companies, social media in particular, come together to share knowledge, but the problem is that this abuse happens across platforms, in many different layers, and until industry joins up, I think it’s going to be very difficult to find a solution. Industry needs to be a partner with education. It shouldn’t just be there supplying tools, or pushing their tools, I could say, onto the education sector and then throwing out bits of education to help young people. It should be there when we’re developing curricula, finding ways together that we can use real-life cases, real-life resources, in a way that will be much more impactful for learning. There are, of course, many issues when we look at education systems, because few teachers have the training that’s necessary to be able to tackle this issue. We have teachers from every one of the 86 regions of Morocco. They have been trained in cascade. Our aim is to have two teachers in every school who fully understand the issue and who receive regular updates, so that with two resource persons in every school, there should be a very fast way to escalate the issues. There are lots of very interesting programs out there. I can cite what’s going on in France, for example, where law students at the Sorbonne University have come together and created a poster competition, because they feel that if they create the framework for a poster competition, providing little snippets of the law so the public becomes more familiar with the law, then everyone can create posters which will be meaningful and informative and reach the young people who are very much concerned by this issue. I really like something that happens in the Netherlands too. They have lots of television programs, debates, but every school there has the freedom to choose the way that it goes about this issue. And when a young person goes from elementary school to middle school, they go through a six-week course on health, on technology, on all of the issues that can really help them tackle these problems, because you know what a big shift it is when they go from elementary school through to secondary school. We have a very innovative project running in Scandinavia, thanks to the support of Huawei. I train a magician, and I work with the magician to find tricks that are going to reinforce the messages that I will put into a presentation. Then it’s the magician that delivers, in one-hour sessions in schools, what I’ve prepared, and that helps them tackle very difficult topics in a fun way, so that kids are on board, are interested, but they don’t feel threatened in any way. There’s one big problem I’ve noticed in all of the countries that I work in, and it’s reporting: the humiliation if you have to report that an image of you has been shared, especially in countries like Morocco or Tunisia. So we need to take another approach to reporting. We need to educate children from the cradle to understand, one, the importance of human dignity, and two, that when something is not going right, there are ways to say it that are not hurtful, that are not telling tales; there are right ways to do it. I should then mention the helplines. This has been a super initiative across Europe. I actually got figures from the Netherlands, which show me that of about 6,500 cases over the past year, about 5% are about AI-generated sexual fake profiles and nudes. But it’s early days. It’s going to get worse. Therapy, support for the victims: these are all extremely important areas. But when I talk to young people, they say, actually, it’s not the nude profiles that are bothersome. If someone puts a bikini, but then a very sexual stance, is it a nude profile? This is where it really raises issues. If there’s a little bit of clothing, then they think they’re out of the category of a sexual fake profile. Young people also turn to me and say, you should be educating the adults about this because very often, it’s them, the ones that are suffering, the ones that are doing this, and the ones that don’t have the opportunity to tackle it. So I’d say that’s a broad sweep of where I think we should go, of the education projects we should set up. But as many of you know, I believe it’s all about digital citizenship. Understanding that you are a citizen, you have an obligation to all of those around you, regardless of what it is. You also have an obligation to share your knowledge, because if we’re here today, we have knowledge about this subject. So we need to share it, so that it really becomes a grassroots movement to educate everyone on how to tackle this plague. Thank you.


Kenneth Leung: Thank you, Janice. It’s indeed a broad sweep of issues that you’ve mentioned, and lots of good solutions. And you also mentioned cases from Scandinavia, Morocco, the Netherlands, and Tunisia. That also means this issue, sexual deepfakes, really transcends borders; it’s a global issue. So this is why we are here, and I would like to now turn the floor to Ms. Juliana Cunha for a Latin American perspective and some of the work that you do. Juliana Cunha holds a bachelor’s degree in psychology and a master’s degree in culture and society. At SaferNet Brazil, Juliana coordinates the National Helpline for Online Safety, interacting with and counseling children, teenagers, and adults about sexting, cyberbullying, sexual extortion, and other risks online. In this regard, she has also collaborated with media outlets and advertising bodies to create national and international award-winning campaigns. After Juliana’s presentation, I will invite everyone who wants to share to come up to the floor in front of the mic on this side, or, for Zoom participants, please do type in your question and we’ll consolidate and address those questions. So the floor is yours. Over to you. Thank you.


Juliana Cunha: Thank you. Good morning. I’d like just to give a brief introduction to SaferNet’s work. SaferNet is a 20-year-old nonprofit organization dedicated to establishing a multi-stakeholder approach to protect human rights in digital environments. We work as a Safer Internet Center, which coordinates actions in three pillars of online safety. The Brazilian national cyber tipline, a hotline to which users can anonymously report crimes against human rights. The national web-based helpline, which offers one-to-one online conversations about different risks and provides support to children, young people, families, and educators about online safety. And the third pillar is the country’s awareness and education hub, which is responsible for the educational activities involving workshops with students, educators, and families, developing materials, and carrying out campaigns on digital citizenship. About the Brazilian context: in 2023, SaferNet reported a historic spike in reports related to child sexual abuse material online in Brazil. And a key factor was the rise in the use of AI tools, such as nudifying apps, as mentioned, and bots, by young people to generate and share fake nudes of classmates. This problem already affected large schools in Brazil, especially private schools, with several cases being reported by media outlets. The new trend challenges us to find an appropriate response, especially due to the fact that victims and perpetrators are minors, and the boundary between sexual experimentation and sexual abuse is becoming a little bit blurred. And this phenomenon, I think, is reshaping the way young people perceive sexuality, relationships, and consent. I would like to highlight the increase in reports in Brazil, especially involving Telegram. As I mentioned before, this is a huge challenge for us in Brazil right now, because in 2023-24, 90% of the reports related to messaging apps in Brazil involved Telegram; WhatsApp and Signal together account for the remaining 10% during the same period. When we notify, the company limits its response to deleting the evidence, with little cooperation with law enforcement agencies. It’s worth noting that at least 38 countries have legislation requiring digital platforms like Telegram to report to authorities when they are aware of hosting child sexual abuse material. In the United States, for example, the National Center for Missing and Exploited Children, NCMEC, received 20 million industry-submitted reports in 2023, none of them from Telegram. Last year, NCMEC saw a 1,325% increase in reports involving generative AI, going from 4,700 reports in 2023 to 67,000 in 2024. It’s a very large increase. This alarming trend is also noted in Brazil. The persistence of systemic risks, inadequate response and moderation of illegal content, and Telegram’s non-compliance with Brazilian child protection laws led SaferNet to file a formal complaint against the company with the Federal Prosecutor’s Office in October last year. And this is the report based on that complaint. I think it’s interesting to show the proportion of the problem in Brazil. So this is the QR code to access the report. To deep dive into this trend, we run an ongoing project from the Safe Online Tech Coalition, which will investigate how teens in Brazil are affected by the use of generative AI to create and spread deepfake sexual images, also known as deep nudes. We are listening to four key groups in these incidents: survivors whose images were manipulated and shared, adolescents who created and shared deep nudes, bystanders who witnessed the situation, and educators and caregivers. We will conduct one-on-one interviews with survivors and perpetrators, co-creation workshops with bystanders, and one session with educators. All activities are trauma-informed, confidential, and led by experienced professionals. Our expected outcomes include qualitative insights into how teens perceive and engage with AI-generated sexual images, practical recommendations to inform child-centered safety policies in Brazil, tangible resources for platforms to improve their trust and safety responses, a national awareness campaign to help teens identify, report, and resist this form of abuse, and a step-by-step methodological manual to replicate this work in other countries and contexts. One of the main challenges we face is going beyond legal and punitive measures. While accountability is essential, this phenomenon is deeply embedded in a culture that reproduces gender-based social norms, as mentioned previously. The misuse of AI to create sexualized images of peers is not a tech issue or a legal gap. It’s a reflection of broader systemic gender inequalities. That’s why prevention must also be cultural. We need long-term interventions focused on education, awareness, and digital literacy in schools, as Janice mentioned, where social norms are being formed. This project aims not only to expose the harm caused by AI-generated sexual image content, but also to empower teens, educators, and communities to engage critically. Our core belief is the best way to protect children is by listening to them. This project challenged the usual top-down approach. Instead of responding with moral panic or punitive measures, we are asking: how do teens actually experience this? What do they think would help? And to conclude my thoughts: to effectively address this issue, we need a coordinated multi-stakeholder response, bringing together tech companies, public institutions, society, and academia. We have seen some initial steps. For example, Meta took action against a company circumventing advertising rules to promote nudifying apps on its platforms. This is a first step, but it must be followed by other measures. We need the entire industry, across sectors, to take stronger action, especially against products that are deliberately designed to violate rights, exploit vulnerabilities, and bypass existing safety standards. This requires clear accountability for enablers of harm, stronger safeguards built into platforms by design, and bold child-centered innovation that respects human rights. And just to conclude, I bring an example of an education resource. It’s in Brazilian Portuguese, unfortunately. It addresses the issue of online gender violence and was co-created with adolescents in Brazil in partnership with UNICEF Brazil. It combines legal guidance, real-life cases, and interactive activities, and is a tool to foster dialogue in the classroom and with youth groups. This type of resource helps to equip youth with the tools to understand their rights, recognize harm, and build a culture of respect online. So thank you very much.


Kenneth Leung: Thank you so much, Juliana. And thank you for bringing us the Brazilian case and highlighting how encrypted platforms and services are misused, not only to disseminate CSAM, but also to create a CSAM economy on the platform, which is very worrying. And I also like how Juliana and Janice have been stressing the importance of cross-platform and cross-sectoral collaboration in combating sexual deepfakes. So now I will open up the floor to everyone. We’d love to hear your statements, questions, and comments, both here and online. You will have two minutes for your statement, and please state your name and affiliation when you do so. Yeah, please head to the mic in front of you. Thank you.


Participant: Okay, I just understood this. Can you hear me? Okay, great. Hello. Great to meet you all, and thank you for the great panel. My name is Mariana. I am Colombian. I work at the DataSphere Initiative. Very happy to hear the Brazilian case. I am leading one of our big projects that’s actually very close to my heart, which is called the Youth for a Data Future Project. We’ve been engaging young people in different parts of the world in the conversation around data governance, and right now we’re wrapping up that project and starting to shape an initiative focused on influencers. So one of my big questions, and I think it’s for all of you actually, is whether you’ve touched on this topic, and how you have started thinking about the role played not only by influencers themselves, as adults influencing the online space, but by young people themselves. And in Brazil, there’s a very interesting case around kidfluencers: four or five-year-olds that are famous because of their parents’ influence, who are put online and are sharing and creating content themselves, or whose parents create content with them. So my question is: how do you see this interplay, and what’s the role of the influencer industry in shaping safety online? And have you thought about any kind of actions that could help us build a safer and more inclusive internet? And there I close my question. This is more of an invitation to anybody who’s interested in the topic: I’ll be very excited to talk about this later after the workshop is over, because I’m very excited about the work that you’re all doing. That’s it. Thank you.


Kenneth Leung: Thank you so much. And I’m seeing there’s a long queue, and we do want to get through every statement. So I would suggest to have maybe three statements or interventions at a time, and then the panel will address it collectively, if that’s all right. But thank you very much for your first question. Thank you.


Participant: Hi, my name is Frances. I’m part of Youthdig. And I just have two questions. The first, I think, Janice, you highlighted this: why not just have a real crackdown on these kinds of applications? Because I think a big reason why young children are using these apps is that it’s just very easy, right? So it’s easy to access them. You just download them. And app stores providing these kinds of applications, whether advertised as nudifying or not explicitly, means that young children think it’s the same as downloading Instagram or a snake game or things like this. So the first message that you’re sending to young people, if it’s allowed on the app store, is that this is appropriate and it’s equivalent to other fun apps, right? So why can’t we just outright ban it? I think this is something that charities in the UK are really trying to push now. And what difficulty do you have with trying to draw the line about which apps you do ban, depending on what kind of tools they allow young people, or people in general, to have? And then the second thing I just wanted to raise was, I very much agree about education and societal and attitudinal change. This is a problem about gender-based violence, and it’s just another form of it, right? And we’re talking about deepfakes today, but I also think it’s important to educate young children, perhaps especially young women, that when you share even real content with people who you think are trusted individuals as a young person, and it’s easy to feel this way if you’re having a relationship online or quasi-online, then it’s very important that these young people know that content shared online is always online. So even when you have WhatsApp as end-to-end encrypted and you can send these one-time images, they’re never one-time images, and this technology is very different to in-person relationships. So I would also say that. Thank you.


Robbert Hoving: Good morning, Robbert Hoving from Offlimits, the Safer Internet Centre in the Netherlands, and also of INHOPE. As Janice already shared our numbers, thank you very much, I won’t go into my statement. But I was wondering two things. Just sitting here, I went to Google, I went to Bing, and I went to DuckDuckGo. And when you put in “best nudify apps”, you just get them. So that’s how easy it is. A global legislative answer might be difficult. We’ve seen it with Mr. Deepfakes. In the Netherlands, deepfakes are both criminalized and also covered under privacy law. But then you can use a VPN and act like you come from another country. So that’s difficult. But I think you might do something with the search engines, and I’m curious about your vision on that. The second part is, as it’s so difficult to have a legal global answer, I do think, but I’m a kind of glass-half-full kind of guy, it is possible to have a more global awareness answer. Very curious how you look at these two questions. Thank you very much.


Kenneth Leung: Thank you very much. Since we have three interventions ready, I guess we can address them very succinctly and then move on. But yeah, I would love your responses to those three interventions.


Yi Teng Au: I’ll address the question regarding why we can’t ban apps that create deepfakes, and the similar question about search results. So actually, in many countries, the laws against deepfakes are very muddy, because it’s a picture that is modified. Although it’s a picture of a face, the body is not. So a lot of the time, it’s very hard to prosecute the perpetrators. There are many societal issues as well. In South Korea, there was an anecdote from a teacher in Incheon: a sexual deepfake photo was uploaded to X, formerly called Twitter. The perpetrator, she found out, was her own student, but she had to find the student herself, because the police did nothing. In short, the law needs to improve to actually be effective in punishing those perpetrators.


Janice Richardson: I’d like to quickly respond to a couple of points. Influencers: kids are actually sick of them. So I think that maybe this is a trend which is not growing, which perhaps will start dwindling out, but once again it’s up to education to show that there are real places and real people who understand their subject, and let’s go to those people and not to someone trying to sell a product. Secondly, yeah, I asked the same question, but in fact we come here, we talk, and then we don’t do anything about it. If people like us came together, formed a group, and started putting pressure on the search engines to delete access to these types of apps, then I think we may get somewhere. It’s very easy for industry; they have the money, they’re getting our data, they’re seeing where people are going. But why don’t people join together? Why don’t people make an international action against this? So that’s my call: let’s keep working on it. And yeah, once online, always online. There’s nothing more that we can say, except teach our children from age one, two, three: the internet’s great to explore, but they have to know that what’s mine is mine, and my privacy is the most precious possession I have.


Juliana Cunha: About the influencers in Brazil, they play a key role in the culture, especially because we have big influencers right now. We are discussing influencers related to gambling, for example, and some influencers went to a public hearing to explain the whole business model related to publicizing gambling apps. But of course, at SaferNet, we have, on the other hand, experience involving influencers in campaigns. It’s very interesting, because it’s an effective way to communicate with young people. So you have the two sides of the coin: the influencers that gain millions publicizing, for example, bad products; and on the other hand, influencers that are role models for many young people. And I think we have to take advantage of that. Regulators have the challenge of regulating influencers in Brazil, but on the other hand, it’s important to involve them. And about the other question: I think, technically, it is possible. But of course, there are challenges, for example, how these companies bypass rules, as in the example of Meta. Even when you ban this kind of advertising, companies try to bypass the rules. So I think the challenge is how to technically implement it and anticipate ways that these players can try to bypass it. Of course, the awareness: I really think that if it’s rooted in the culture, it’s really hard. Of course, we can do better in terms of awareness, as it’s a new global trend and many people in Brazil aren’t aware of it. But I think that culture plays an important role, so awareness maybe can address this culture, because in Brazil especially, we have a spike of gender violence, so it’s very hard to change this reality.


Kenneth Leung: Right, thank you so much for the responses. Because we are closing in on the end of the session but still want to get through everyone, I would suggest that each intervention take maybe just one minute, and our panelists give a very quick response. That would be gorgeous. So again, please, you have the floor for one minute. Thanks.


Maciej Gron: Thank you very much. My name is Maciej Gron, I’m from the Polish research institute NASK and also the hotline Dyżurnet.pl. First of all, I would like to thank you very much for organizing this session; it’s very important. The deepfake phenomenon is not something new, but, you know, the scale and accessibility are completely new, and that makes a big difference. Especially these new apps for undressing people: they are really democratizing a technology that was very difficult to use 20 years ago, and even two or three years ago, and now it’s accessible on a phone. I’m a lawyer, so I can speak about our regulatory system. Now it’s very difficult, because we have a completely new situation, you know, where children are transformed into adults, and there are new situations which are not easy to tackle, especially in the formal process in court. So this is the big challenge. Also a big challenge is that the victim and the perpetrator can both be children. For example, in our legal system there is something like a public pedophile register, and children are not pedophiles. So we have to change the law; we are in the process of preparing a new part of the criminal code and changing some regulations. Thank you very much.


Kenneth Leung: Can we have first this gentleman for one minute?


Andrew Campling: Yes, surely, I’ll be brief. Hi, my name is Andrew Campling, I’m a trustee with the Internet Watch Foundation. Three very quick points. I put in the chat a link where people can report illegal images, including AI-generated ones, which you can report anonymously. If they’re deemed to be illegal, they’re added to our block list, and that will prevent, through many services, them being shared further, which is at least progress. And that includes, by the way, the public spaces on Telegram, which has recently joined the IWF after its CEO was placed under house arrest in France. A question for the panel: I think the biggest issue we have is with end-to-end encrypted messaging services. You can technically block, prevent the sharing of CSAM on those services, but it needs technical action by the service operators. It doesn’t break encryption, and I think that’s where we should focus our attention, because we know that that’s where CSAM, including deepfake-generated images, is very widely shared. There’s lots of research that shows that. Thank you.


Kenneth Leung: Thank you so much. I guess we can run through all the questions and then panelists can respond with the final remarks. Please, go ahead.


Torsten Krause: Yeah, hello, my name is Torsten Krause. I’m affiliated with the Digital Opportunities Foundation, based in Germany. I would like to touch on two points. It was mentioned that pictures and photos of pupils, members of sports communities, and so on are part of the challenge. And I get this point, I understand it, but I’m wondering what the solution will be, because I cannot imagine an internet and digital environment without photos and experiences of children and young persons online; they also have the right to see their experience reflected on the internet and to find, yeah, a connection to themselves. So what would be the solution on that? Secondly, it’s connected to what Andrew Campling asked. In the European Union, we have been discussing for three years whether it should be allowed, whether it should be okay, to compromise privacy and detect CSAM, known and unknown, and also grooming, in encrypted environments. And I would like to know, maybe especially from the youth ambassadors, what’s your perspective on that? Thanks.


Participant: Sorry. Hi, my name is Claire and I’m a student from Hong Kong. I was wondering, beyond education, policy, and implementation, what are some preventative measures for students in combating deepfakes? Thank you. Hello, I’m Sana and I’m from NetMission. So my question is, given the rapid advancement of AI tools that can generate hyper-realistic images, what kind of detection systems or proactive mechanisms are currently being developed, especially when we are talking about NGOs, and when we are discussing the rapidly increasing viral videos and images in Korea? So at that point, I just have a question: should we prioritize identifying and stopping the spread of sexual deepfakes targeting minors before they go viral? Thank you.


Yuri Bokovoy: Hi, I’m Yuri Bokovoy from the Finnish Green Party. My first question is specifically to Janice. It’s about how to improve awareness of these issues, especially in education. When the government’s rotary legislators in education departments are, at least in our country, quite often made up of quite conservative voices, which dismiss these issues as harmless. And the second question is more general about, we’ve seen a lot of these regulations that are supposed to protect children be misused by more authoritarian governments to silence free speech and expression elsewhere, most recently today and yesterday in Hungary. What can we do to safeguard against misuse of these regulations that are supposed to protect children?


Kenneth Leung: Thank you so much. And we actually have one last comment on Zoom, which I'll just read out. It's a statement: she wants to chime in on the Korea case and ask whether any of the innovation measures mentioned in the slides are truly effective. So I guess in the next 30 seconds, everyone can make their remarks and conclude the session.


Juliana Cunha: Yes, I have no time, but I'd like to say thank you, and I'm happy to talk to anyone who wants to continue the conversation after the panel. Thank you very much.


Janice Richardson: If we’re so clever with technology, why can’t we make something that when once we’ve put an image online, it becomes indelible, it becomes unchangeable. I’d like more efforts from the tech industry, improving awareness. There’s a great initiative in Latin America where they actually educate the ministers, the ministers of education, so that we get not only a bottom-up from awareness raisers, but a top-down from the people who are meant to be there, who are elected to look after us.


Yi Teng Au: For me, the problem I really feel is the accessibility of these tools. Even if we have outright safeguards against sexually explicit content, for example in ChatGPT, people who are really intent on it will find ways to download offline models such as Stable Diffusion. So the real question is how we educate people enough and make the creation of such content less accessible.


Ji Won Oh: Okay, in closing: deepfake sex crimes are a global problem. Before starting to study internet governance, I doubted that I actually had a big role, but now I think that everyone has a huge role. As a student of political science, I think the political and institutional aspects need to be strengthened. Opinions vary, but all of them matter for protecting people. Deepfake sex crimes may start online, but they destroy real lives. So let's face the problem together, not just with punishment, but with prevention, awareness, and empathy. Thank you.


Kenneth Leung: Thank you so much. And I guess we will be concluding this session, but we'd love to have more of your thoughts, so if you can, please comment on the platform. Once again, the concluding takeaway is that we must act together in order to combat sexual deepfakes and to be good citizens. Thank you very much, and I hope you enjoy the rest of your day. Thank you.


J

Ji Won Oh

Speech speed

131 words per minute

Speech length

778 words

Speech time

356 seconds

Korean schools experienced widespread deepfake sexual videos affecting hundreds of schools with secret Telegram chat rooms

Explanation

In August of the previous year, hundreds of secret chat rooms on Telegram were discovered sharing deepfake sexual videos using real faces of students from elementary, middle, and high schools. This incident shocked the entire country and demonstrated the widespread nature of the problem affecting Korean educational institutions.


Evidence

Hundreds of secret chat rooms on Telegram sharing deepfake sexual videos of students from elementary, middle, and high schools discovered in August


Major discussion point

Scale and Impact of Sexual Deepfakes Targeting Minors


Topics

Cybersecurity | Human rights | Sociocultural


Deepfake crime reports in Korea increased seven-fold from 156 cases in 2021 to 1,202 cases in 2024

Explanation

The dramatic increase in deepfake-related sex crimes in Korea shows the rapidly escalating nature of this problem. The seven-fold increase over just three years indicates that this is becoming a major societal issue requiring urgent attention and intervention.


Evidence

156 police reports about deepfake sex crimes in 2021 versus 1,202 cases in 2024


Major discussion point

Scale and Impact of Sexual Deepfakes Targeting Minors


Topics

Cybersecurity | Human rights | Legal and regulatory


Many students don’t understand seriousness and think deepfake creation is funny or harmless

Explanation

Students often view deepfake creation as entertainment rather than understanding the serious harm it causes to victims. This lack of awareness about the consequences contributes to the continued spread of the problem, as perpetrators don’t recognize the pain and trauma they inflict on others.


Evidence

Students think it’s funny and laugh and share the videos without considering the pain this causes to victims


Major discussion point

Cultural and Social Factors


Topics

Human rights | Sociocultural | Legal and regulatory


Agreed with

– Juliana Cunha
– Janice Richardson

Agreed on

The problem is rooted in deeper cultural and social issues beyond technology


Y

Yi Teng Au

Speech speed

136 words per minute

Speech length

536 words

Speech time

235 seconds

Over 500 schools affected by deepfake videos in South Korea, with 54% of offenders claiming they did it “for fun”

Explanation

A comprehensive survey of middle and high school students revealed that the majority of offenders engaged in deepfake creation for entertainment purposes. This statistic highlights the casual attitude many young people have toward what is actually a serious form of abuse and demonstrates a lack of awareness about consequences.


Evidence

December survey of 2,145 middle and high schoolers showed 54% said offenders did it for fun, with other reasons including curiosity or thinking consequences were minor


Major discussion point

Scale and Impact of Sexual Deepfakes Targeting Minors


Topics

Cybersecurity | Human rights | Sociocultural


Korean Ministry of Education published five guidebooks tailored to different age groups addressing deepfakes

Explanation

The Korean government responded to the deepfake crisis by creating comprehensive educational materials designed for different developmental stages. These guidebooks cover various scenarios including being a victim, witnessing abuse, or causing harm, providing practical guidance for students, teachers, and caregivers.


Evidence

Five guidebooks published in April: cartoony version for elementary students, separate editions for middle and high schoolers, teachers and caregivers, covering three key situations


Major discussion point

Educational and Prevention Strategies


Topics

Sociocultural | Human rights | Legal and regulatory


Agreed with

– Ji Won Oh
– Juliana Cunha
– Janice Richardson

Agreed on

Education and awareness are critical components of prevention


Platform hopping phenomenon where perpetrators shift from Telegram to other platforms making detection harder

Explanation

As law enforcement and platforms crack down on deepfake crimes on popular platforms like Telegram, perpetrators adapt by moving to other platforms with less monitoring. This creates an ongoing challenge for authorities as criminals stay ahead of enforcement efforts by constantly changing their methods and locations.


Evidence

Perpetrators shifted from Telegram to other platforms like the DC Inside forum as Telegram began cooperating with South Korean authorities, making detection harder due to lower data logging


Major discussion point

Technical Challenges and Platform Issues


Topics

Cybersecurity | Legal and regulatory | Infrastructure


Agreed with

– Juliana Cunha
– Andrew Campling
– Robbert Hoving

Agreed on

Platform cooperation and technical solutions are insufficient


Korean DeepFake dataset with 2.73 terabytes achieved 96% accuracy in detecting altered videos

Explanation

South Korea developed a comprehensive dataset specifically for detecting deepfakes in Korean content, addressing the lack of Korean representation in existing detection systems. This technical innovation demonstrates how countries can develop localized solutions to improve detection capabilities for their specific linguistic and cultural contexts.


Evidence

Korean DeepFake dataset released in 2021 contains 2.73 terabytes of photos and videos, Seoul National University students achieved 96% accuracy in detecting altered videos


Major discussion point

Technical Innovation and Detection


Topics

Cybersecurity | Infrastructure | Legal and regulatory
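
The 96% figure is a standard binary-classification accuracy over a labeled test set. As a minimal illustrative sketch of how such a number is computed (the predictions and sample counts below are hypothetical stand-ins, not the Seoul National University system):

```python
# Sketch: detection accuracy = share of labeled test videos (fake vs. real)
# that the detector classifies correctly. A real detector would be a model
# trained on data such as the Korean DeepFake dataset described above.

def accuracy(predictions: list[bool], labels: list[bool]) -> float:
    """Fraction of samples where the detector's verdict matches ground truth."""
    assert len(predictions) == len(labels) and labels
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

labels      = [True] * 13 + [False] * 12   # True = known deepfake
predictions = [True] * 12 + [False] * 13   # detector misses one fake
print(f"accuracy = {accuracy(predictions, labels):.0%}")   # accuracy = 96%
```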


Seoul Metropolitan Government developed detection tool identifying school items, uniforms, and youth slang

Explanation

The Seoul government created specialized detection technology that goes beyond facial recognition to identify content involving minors. This tool can flag underage content even when faces aren’t visible by recognizing contextual clues like school uniforms, educational materials, and language patterns specific to young people.


Evidence

Detection tool enhanced in May 2025 identifies schoolbooks, uniforms, dolls, youth slang, scans search terms and drafts multilingual reports depending on site’s host country


Major discussion point

Technical Innovation and Detection


Topics

Cybersecurity | Infrastructure | Human rights
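
The session describes the Seoul tool only at a high level: it flags minor-related cues beyond faces and drafts reports in the language of the hosting country. A toy sketch of that kind of contextual flagging follows; the cue list, country table, and report template are invented for illustration and are not the actual system's rules:

```python
# Hedged sketch of contextual flagging for minor-related content.
# All term lists and mappings here are illustrative assumptions.

MINOR_CUES = {"schoolbook", "school uniform", "homework", "classroom"}  # hypothetical
REPORT_LANGUAGE = {"KR": "Korean", "JP": "Japanese", "US": "English"}   # hypothetical

def flag_minor_cues(page_text: str) -> set[str]:
    """Return which minor-related cue terms appear in the page text."""
    text = page_text.lower()
    return {cue for cue in MINOR_CUES if cue in text}

def draft_report(url: str, host_country: str, cues: set[str]) -> str:
    """Draft a takedown report in the host country's working language."""
    lang = REPORT_LANGUAGE.get(host_country, "English")
    return (f"[{lang}] Takedown request for {url}: "
            f"suspected minor-related content (cues: {', '.join(sorted(cues))})")

cues = flag_minor_cues("...photo shows a school uniform and a schoolbook...")
if cues:
    print(draft_report("https://example.com/page", "KR", cues))
```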


Laws against deepfakes are unclear because modified images create legal ambiguity about prosecution

Explanation

The legal framework for prosecuting deepfake crimes is complicated because the technology creates images that combine real faces with fabricated bodies. This technical distinction creates challenges for law enforcement and prosecutors who struggle to apply existing laws to these hybrid digital creations.


Evidence

Laws are muddy because it’s a picture that is modified – face is real but body is not, making it hard to prosecute perpetrators; anecdote of teacher in Incheon who had to find the student perpetrator herself because police did nothing


Major discussion point

Legal and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity | Human rights


Agreed with

– Kenneth Leung
– Maciej Gron

Agreed on

Legal frameworks are inadequate and need strengthening


Disagreed with

– Frances (Participant)
– Janice Richardson

Disagreed on

Approach to banning deepfake creation applications


Accessibility of AI tools like Stable Diffusion makes content creation too easy despite safeguards

Explanation

Even when platforms implement safeguards against sexually explicit content creation, determined users can circumvent these protections by downloading offline AI models. The widespread availability of these tools means that technical restrictions alone are insufficient to prevent abuse.


Evidence

People bypass safeguards like ChatGPT restrictions by downloading offline models such as Stable Diffusion


Major discussion point

Technical Challenges and Platform Issues


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Need for countries to develop datasets tailored to local needs for effective detection

Explanation

The Korean experience demonstrates that detection systems work better when trained on locally relevant data that reflects the specific linguistic, cultural, and visual characteristics of each region. Countries without such specialized datasets should consider developing their own to improve detection accuracy for their populations.


Evidence

Korean dataset was created to address lack of Korean representation in existing datasets, suggesting other countries might consider developing datasets tailored to their local needs


Major discussion point

Technical Innovation and Detection


Topics

Infrastructure | Cybersecurity | Development


J

Juliana Cunha

Speech speed

91 words per minute

Speech length

1350 words

Speech time

880 seconds

Brazil experienced historic spike in child sexual abuse material reports in 2023, with AI-generated content as key factor

Explanation

SaferNet Brazil documented an unprecedented increase in reports of child sexual abuse material, with AI tools being a significant contributing factor. The rise was particularly attributed to young people using AI applications and bots to create and share fake nude images of their classmates, representing a new form of abuse.


Evidence

Historic spike in reports related to child sexual abuse material online in 2023, with rising use of AI tools like nudifying apps and bots by young people to generate and share fake nudes of classmates


Major discussion point

Scale and Impact of Sexual Deepfakes Targeting Minors


Topics

Cybersecurity | Human rights | Legal and regulatory


Telegram accounts for 90% of messaging app reports in Brazil with limited cooperation with law enforcement

Explanation

The vast majority of reports involving messaging applications in Brazil are related to Telegram, which has shown minimal cooperation with authorities. This lack of collaboration hampers law enforcement efforts and allows illegal content to persist on the platform for extended periods.


Evidence

90% of reports related to messaging apps in Brazil involve Telegram, with WhatsApp and Signal together accounting for remaining 10%; Telegram limits response to deleting evidence with little cooperation with law enforcement


Major discussion point

Technical Challenges and Platform Issues


Topics

Cybersecurity | Legal and regulatory | Infrastructure


Agreed with

– Yi Teng Au
– Andrew Campling
– Robbert Hoving

Agreed on

Platform cooperation and technical solutions are insufficient


Issue reflects broader systemic gender inequalities requiring cultural prevention beyond legal measures

Explanation

The misuse of AI to create sexualized images is not merely a technical or legal problem but reflects deeper cultural issues around gender-based violence and social norms. Effective prevention requires long-term cultural interventions focused on education and changing attitudes, not just punishment and regulation.


Evidence

Misuse of AI to create sexualized images is reflection of broader systemic gender inequalities; prevention must be cultural with long-term interventions focused on education, awareness, digital literacy in schools where social norms are formed


Major discussion point

Cultural and Social Factors


Topics

Human rights | Sociocultural | Legal and regulatory


Agreed with

– Ji Won Oh
– Janice Richardson

Agreed on

The problem is rooted in deeper cultural and social issues beyond technology


Coordinated response needed bringing together tech companies, public institutions, society and academia

Explanation

Addressing the deepfake problem requires collaboration across multiple sectors rather than isolated efforts. This multi-stakeholder approach should include stronger industry accountability, better platform safeguards, and child-centered innovation that prioritizes human rights protection.


Evidence

Need coordinated multi-stakeholder approach bringing together tech companies, public institutions, society and academia; example of Meta taking action against a company circumventing advertising rules to promote nudifying apps


Major discussion point

Multi-stakeholder Collaboration Needs


Topics

Legal and regulatory | Infrastructure | Human rights


Agreed with

– Janice Richardson

Agreed on

Multi-stakeholder collaboration is essential for addressing sexual deepfakes


Brazil created educational resources combining legal guidance with real-life cases and interactive activities

Explanation

SaferNet Brazil developed comprehensive educational materials in partnership with UNICEF that address gender-based online violence through practical, interactive approaches. These resources were co-created with adolescents to ensure relevance and effectiveness in fostering classroom dialogue and building respectful online culture.


Evidence

Educational resource addressing gender online violence co-created with adolescents in partnership with UNICEF Brazil, combining legal guidance, real-life cases and interactive activities as tool for classroom dialogue


Major discussion point

Educational and Prevention Strategies


Topics

Sociocultural | Human rights | Development


Agreed with

– Ji Won Oh
– Yi Teng Au
– Janice Richardson

Agreed on

Education and awareness are critical components of prevention


Problem rooted in culture of gender-based violence making awareness campaigns challenging

Explanation

In Brazil, the deepfake issue is compounded by existing high levels of gender-based violence in society, making it particularly difficult to address through awareness campaigns alone. The cultural normalization of violence against women creates additional barriers to changing attitudes and behaviors around digital abuse.


Evidence

In Brazil, the spike of gender-based violence makes it very hard to change this reality; culture plays an important role, so awareness campaigns may help address it


Major discussion point

Cultural and Social Factors


Topics

Human rights | Sociocultural | Legal and regulatory


J

Janice Richardson

Speech speed

132 words per minute

Speech length

1431 words

Speech time

650 seconds

Netherlands helpline reported about 5% of 6,500 cases involved AI-generated sexual fake profiles

Explanation

Data from the Netherlands’ Safer Internet Centre shows that AI-generated sexual content is already a measurable portion of reported cases, though still in early stages. The relatively small percentage suggests the problem may grow significantly as the technology becomes more accessible and awareness increases.


Evidence

Figures from Netherlands showing about 5% of 6,500 cases over past year were AI-generated sexual fake profiles, noting it’s early days and will get worse


Major discussion point

Scale and Impact of Sexual Deepfakes Targeting Minors


Topics

Cybersecurity | Human rights | Legal and regulatory


Cross-platform collaboration lacking with companies needing to share knowledge more effectively

Explanation

While some initiatives like Project Lantern exist for companies to share information about harmful content, the collaboration remains insufficient. The problem spans multiple platforms and layers of the internet, requiring more comprehensive industry cooperation to be effectively addressed.


Evidence

Projects like Lantern where social media companies share knowledge exist, but problem is cross-platform and happens in many different layers; until industry joins up, solution will be difficult


Major discussion point

Multi-stakeholder Collaboration Needs


Topics

Infrastructure | Legal and regulatory | Cybersecurity


Agreed with

– Juliana Cunha

Agreed on

Multi-stakeholder collaboration is essential for addressing sexual deepfakes


Industry should partner with education in curriculum development rather than just providing tools

Explanation

Technology companies should move beyond simply supplying educational tools or creating superficial educational content to becoming genuine partners in developing curricula. This deeper collaboration would enable the use of real-life cases and resources in ways that would be more impactful for learning about digital safety.


Evidence

Industry shouldn’t just supply tools or push tools onto education sector and throw out bits of education, but should be there when developing curricula, finding ways to use real-life cases and resources for more impactful learning


Major discussion point

Multi-stakeholder Collaboration Needs


Topics

Sociocultural | Infrastructure | Development


Agreed with

– Juliana Cunha

Agreed on

Multi-stakeholder collaboration is essential for addressing sexual deepfakes


Morocco trained teachers in cascade model with two resource persons per school for rapid issue escalation

Explanation

Morocco implemented a systematic approach to teacher training where educators from all 86 regions receive specialized training on deepfake issues and digital safety. The cascade model ensures that every school has at least two trained resource persons who can quickly address problems and provide regular updates to staff.


Evidence

Teachers from all 86 regions of Morocco trained in cascade model with aim of having two teachers who fully understand the issue and receive regular updates in every school for fast escalation


Major discussion point

Educational and Prevention Strategies


Topics

Sociocultural | Development | Human rights


Agreed with

– Ji Won Oh
– Yi Teng Au
– Juliana Cunha

Agreed on

Education and awareness are critical components of prevention


Netherlands provides six-week courses on health and technology during elementary to middle school transition

Explanation

The Netherlands has implemented a structured educational approach that recognizes the critical transition period when students move from elementary to secondary school. During this vulnerable time, students receive comprehensive six-week courses covering health, technology, and related safety issues to prepare them for increased digital independence.


Evidence

Every school in Netherlands has freedom to choose approach, but when young person goes from elementary to middle school, they go through six-week course on health, technology, and related issues


Major discussion point

Educational and Prevention Strategies


Topics

Sociocultural | Human rights | Development


Agreed with

– Ji Won Oh
– Yi Teng Au
– Juliana Cunha

Agreed on

Education and awareness are critical components of prevention


Scandinavia uses magicians to deliver educational content making learning fun and non-threatening

Explanation

An innovative educational approach in Scandinavia combines entertainment with education by training magicians to deliver digital safety messages. This method engages students through magic tricks that reinforce educational content, making the learning experience enjoyable while avoiding the threatening or preachy tone that might cause students to disengage.


Evidence

Innovative project in Scandinavia supported by Huawei where magician is trained to deliver educational content through tricks that reinforce messages in one-hour school sessions, making it fun so kids are interested but don’t feel threatened


Major discussion point

Educational and Prevention Strategies


Topics

Sociocultural | Development | Human rights


Reporting challenges exist due to humiliation, especially in countries like Morocco and Tunisia

Explanation

Cultural factors create significant barriers to reporting deepfake abuse, particularly in more conservative societies where victims may face additional stigma. The humiliation associated with having to report that intimate images have been shared creates a major obstacle to seeking help and justice.


Evidence

Big problem with reporting due to humiliation if you have to report that an image of you has been shared, especially in countries like Morocco or Tunisia


Major discussion point

Cultural and Social Factors


Topics

Human rights | Sociocultural | Legal and regulatory


Young people indicate adults need education as they are often the ones suffering and lacking solutions

Explanation

Students have identified that adults, including parents, teachers, and other authority figures, often lack understanding of digital issues and are ill-equipped to help when problems arise. This creates a situation where young people, who are most affected by these issues, cannot get adequate support from the adults who should be protecting them.


Evidence

Young people say adults should be educated about this because very often, adults are the ones suffering, the ones doing this, and the ones who don’t have opportunity to tackle it


Major discussion point

Cultural and Social Factors


Topics

Sociocultural | Human rights | Development


Agreed with

– Ji Won Oh
– Juliana Cunha

Agreed on

The problem is rooted in deeper cultural and social issues beyond technology


International action needed with people joining together to pressure search engines and platforms

Explanation

Individual discussions and conferences are insufficient without coordinated action to pressure technology companies to remove access to harmful applications. Richardson calls for people to organize internationally and actively lobby search engines and platforms rather than just talking about the problems.


Evidence

People come to conferences and talk but don’t do anything; if people came together, formed a group, started putting pressure on search engines to delete access to these apps, then may get somewhere


Major discussion point

Multi-stakeholder Collaboration Needs


Topics

Legal and regulatory | Infrastructure | Cybersecurity


Disagreed with

– Frances (Participant)
– Yi Teng Au

Disagreed on

Approach to banning deepfake creation applications


Technology should make images indelible and unchangeable once posted online

Explanation

Richardson argues that if the technology industry is truly advanced, it should be able to create systems that prevent images from being altered once they are posted online. This would address the root technical problem that enables deepfake creation by making source images immutable.


Evidence

If we’re so clever with technology, why can’t we make something that when once we’ve put an image online, it becomes indelible, unchangeable


Major discussion point

Technical Innovation and Detection


Topics

Infrastructure | Cybersecurity | Legal and regulatory
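
Strictly "indelible" pixels are not technically enforceable, since anyone can re-encode an image; what existing provenance work (for example, C2PA-style content credentials) offers instead is tamper evidence: the publisher signs the content when it is posted, so any later alteration is detectable. A minimal standard-library sketch of that idea, with HMAC standing in for a real public-key signature:

```python
import hashlib, hmac

# Tamper-evident publishing: sign a hash of the image at publish time;
# any later edit (e.g. a deepfake manipulation) breaks verification.
# HMAC with a demo key stands in for a production digital signature.

PUBLISHER_KEY = b"demo-key-not-for-production"  # illustrative only

def sign_image(image_bytes: bytes) -> bytes:
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"\x89PNG...original pixel data..."
tag = sign_image(original)
print(verify_image(original, tag))                     # True: untouched
print(verify_image(original + b"edited pixels", tag))  # False: alteration detected
```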


K

Kenneth Leung

Speech speed

138 words per minute

Speech length

1459 words

Speech time

633 seconds

UK’s 2025 Data Use and Access Act criminalized creation of intimate images but only covers adults

Explanation

The UK has implemented new legislation that makes it illegal to create or request the creation of intimate images without consent, but this protection is limited to adults. The government’s rationale is that similar behaviors targeting minors are already covered by existing laws, though questions remain about whether this is sufficient protection.


Evidence

UK’s 2025 Data Use and Access Act became effective with provision criminalizing creation and requesting creation of purported intimate images, but additional safeguards only apply to adults as behaviors targeting minors already covered by law


Major discussion point

Legal and Regulatory Responses


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Yi Teng Au
– Maciej Gron

Agreed on

Legal frameworks are inadequate and need strengthening


A

Andrew Campling

Speech speed

151 words per minute

Speech length

178 words

Speech time

70 seconds

End-to-end encrypted messaging services present biggest challenge for blocking CSAM sharing

Explanation

While it’s technically possible to prevent the sharing of child sexual abuse material on encrypted messaging services without breaking encryption, it requires active technical implementation by service operators. This represents the most significant challenge in preventing the spread of deepfake and other illegal content.


Evidence

Can technically block and prevent sharing of CSAM on end-to-end encrypted messaging services without breaking encryption, but needs technical action by service operators; research shows this is where CSAM including deepfake images is widely shared


Major discussion point

Technical Challenges and Platform Issues


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Agreed with

– Yi Teng Au
– Juliana Cunha
– Robbert Hoving

Agreed on

Platform cooperation and technical solutions are insufficient


Disagreed with

– Torsten Krause

Disagreed on

Privacy versus child protection in encrypted communications
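
Campling's claim that sharing of known CSAM can be blocked "without breaking encryption" typically refers to on-device matching against a hash list before a message is ever encrypted. A simplified sketch under that assumption follows; production proposals use perceptual hashes (such as PhotoDNA) that survive re-encoding, whereas the exact SHA-256 matching here is only illustrative:

```python
import hashlib

# Client-side check runs on the sender's device *before* encryption,
# so the transport remains end-to-end encrypted and nothing is
# decrypted server-side. Hash list and flow are illustrative.

BLOCKLIST = {  # hashes of known illegal images, distributed to clients
    hashlib.sha256(b"known-abuse-image-bytes").hexdigest(),
}

def safe_to_send(attachment: bytes) -> bool:
    """Refuse to encrypt/send an attachment matching the blocklist."""
    return hashlib.sha256(attachment).hexdigest() not in BLOCKLIST

def send(attachment: bytes) -> None:
    if not safe_to_send(attachment):
        print("blocked on-device before encryption")
        return
    # the usual encrypt-and-send path would run here, unchanged
    print("encrypted and sent")

send(b"holiday photo")            # encrypted and sent
send(b"known-abuse-image-bytes")  # blocked on-device before encryption
```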


Internet Watch Foundation provides anonymous reporting system for illegal images including AI-generated content

Explanation

The Internet Watch Foundation offers a system where people can anonymously report illegal images, including AI-generated content, which are then added to block lists if deemed illegal. This helps prevent further sharing of harmful content across multiple services and platforms.


Evidence

Link provided for anonymous reporting of illegal images including AI-generated content, which are added to block list if illegal and prevent sharing through many services; includes public spaces on Telegram which recently joined IWF


Major discussion point

Multi-stakeholder Collaboration Needs


Topics

Cybersecurity | Legal and regulatory | Infrastructure
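
The pipeline described here (anonymous report, human assessment, confirmed-illegal content hashed into a shared block list that member services consult) can be sketched as follows; the function names and exact-hash matching are illustrative assumptions, not the IWF's real tooling:

```python
import hashlib

# Sketch of a report-to-blocklist pipeline: only the content hash is
# retained, never the reporter's identity, and member services check
# uploads against the shared list to prevent further distribution.

blocklist: set[str] = set()

def handle_report(image_bytes: bytes, assessed_illegal: bool) -> None:
    """Anonymous report: a human analyst's assessment gates the blocklist."""
    if assessed_illegal:
        blocklist.add(hashlib.sha256(image_bytes).hexdigest())

def service_should_block(image_bytes: bytes) -> bool:
    """Member services query the shared blocklist on upload."""
    return hashlib.sha256(image_bytes).hexdigest() in blocklist

handle_report(b"reported image bytes", assessed_illegal=True)
print(service_should_block(b"reported image bytes"))  # True: sharing prevented
```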


R

Robbert Hoving

Speech speed

188 words per minute

Speech length

186 words

Speech time

59 seconds

Search engines easily provide access to deepfake creation apps when searched

Explanation

A simple search on major search engines like Google, Bing, and DuckDuckGo for terms like “best nudify apps” immediately returns results for deepfake creation tools. This demonstrates how easily accessible these harmful applications are through mainstream search platforms.


Evidence

Went to Google, Bing and DuckDuckGo and searching ‘best nudify apps’ easily provides access to deepfake creation tools


Major discussion point

Technical Challenges and Platform Issues


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Agreed with

– Yi Teng Au
– Juliana Cunha
– Andrew Campling

Agreed on

Platform cooperation and technical solutions are insufficient
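
One intervention this observation points at, also raised in the follow-up questions below, is query-level filtering of results that clearly seek such tools. A hypothetical sketch; the term list and the all-or-nothing policy are invented for illustration, and real search engines would rely on far richer signals:

```python
# Illustrative query-level filter for searches seeking deepfake/nudify
# tools. Terms and policy are assumptions, not any engine's actual rules.

BLOCKED_QUERY_TERMS = {"nudify", "undress app", "deepfake nude"}  # assumed

def filter_results(query: str, results: list[str]) -> list[str]:
    """Suppress results for queries that clearly seek abuse tools."""
    if any(term in query.lower() for term in BLOCKED_QUERY_TERMS):
        return []  # could instead return only help/awareness resources
    return results

print(filter_results("best nudify apps", ["app1.example", "app2.example"]))  # []
```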


M

Maciej Gron

Speech speed

122 words per minute

Speech length

228 words

Speech time

112 seconds

Polish legal system faces challenges with new situations involving child victims and perpetrators

Explanation

Poland’s legal framework struggles to address cases where both victims and perpetrators are children, creating unprecedented legal situations. The existing system, including mechanisms like public pedophile registers, wasn’t designed for cases involving minors as perpetrators, requiring new legal approaches and criminal code modifications.


Evidence

Completely new situation where victim and perpetrator can be children creates new situations not easy to tackle in formal court process; public pedophile register exists but children are not pedophiles, so law needs changing with new criminal code in preparation


Major discussion point

Legal and Regulatory Responses


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Yi Teng Au
– Kenneth Leung

Agreed on

Legal frameworks are inadequate and need strengthening


T

Torsten Krause

Speech speed

141 words per minute

Speech length

172 words

Speech time

72 seconds

European Union discussing compromising privacy to detect CSAM in encrypted environments for three years

Explanation

The European Union has been engaged in ongoing discussions about whether it should be permissible to compromise privacy protections in order to detect child sexual abuse material, including unknown content and grooming activities, within encrypted communication environments. This debate highlights the tension between privacy rights and child protection.


Evidence

European Union discussing for three years whether it should be allowed to compromise privacy and detect CSAM known and unknown and grooming in encrypted environments


Major discussion point

Legal and Regulatory Responses


Topics

Legal and regulatory | Human rights | Cybersecurity


Disagreed with

– Andrew Campling

Disagreed on

Privacy versus child protection in encrypted communications


Y

Yuri Bokovoy

Speech speed

109 words per minute

Speech length

114 words

Speech time

62 seconds

Regulations protecting children risk misuse by authoritarian governments to silence free speech

Explanation

Well-intentioned regulations designed to protect children from online harms can be exploited by authoritarian governments as tools for censorship and suppressing free expression. Recent examples in Hungary demonstrate how child protection laws can be misused to silence legitimate speech and expression.


Evidence

Regulations supposed to protect children misused by authoritarian governments to silence free speech and expression, most recently in Hungary


Major discussion point

Legal and Regulatory Responses


Topics

Legal and regulatory | Human rights | Sociocultural


P

Participant

Speech speed

171 words per minute

Speech length

790 words

Speech time

276 seconds

Detection systems and proactive mechanisms needed to stop spread before content goes viral

Explanation

Given the rapid advancement of AI tools that can generate hyper-realistic images and the speed at which harmful content can spread online, there is an urgent need for detection systems and proactive mechanisms that can identify and stop sexual deepfakes targeting minors before they achieve viral distribution.


Major discussion point

Technical Innovation and Detection


Topics

Cybersecurity | Infrastructure | Human rights


Agreements

Agreement points

Multi-stakeholder collaboration is essential for addressing sexual deepfakes

Speakers

– Juliana Cunha
– Janice Richardson

Arguments

Coordinated response needed bringing together tech companies, public institutions, society and academia


Cross-platform collaboration lacking with companies needing to share knowledge more effectively


Industry should partner with education in curriculum development rather than just providing tools


Summary

Both speakers emphasize that effective solutions require coordinated efforts across multiple sectors including technology companies, educational institutions, government agencies, and civil society organizations rather than isolated approaches.


Topics

Legal and regulatory | Infrastructure | Human rights


The problem is rooted in deeper cultural and social issues beyond technology

Speakers

– Ji Won Oh
– Juliana Cunha
– Janice Richardson

Arguments

Many students don’t understand seriousness and think deepfake creation is funny or harmless


Issue reflects broader systemic gender inequalities requiring cultural prevention beyond legal measures


Young people indicate adults need education as they are often the ones suffering and lacking solutions


Summary

All three speakers recognize that sexual deepfakes are not merely a technical problem but reflect deeper cultural attitudes about gender, consent, and digital behavior that require long-term educational and cultural interventions.


Topics

Human rights | Sociocultural | Legal and regulatory


Education and awareness are critical components of prevention

Speakers

– Ji Won Oh
– Yi Teng Au
– Juliana Cunha
– Janice Richardson

Arguments

Korean Ministry of Education published five guidebooks tailored to different age groups addressing deepfakes


Brazil created educational resources combining legal guidance with real-life cases and interactive activities


Morocco trained teachers in cascade model with two resource persons per school for rapid issue escalation


Netherlands provides six-week courses on health and technology during elementary to middle school transition


Summary

All speakers agree that comprehensive educational approaches tailored to different age groups and contexts are essential for preventing sexual deepfake abuse and building digital literacy.


Topics

Sociocultural | Human rights | Development


Legal frameworks are inadequate and need strengthening

Speakers

– Yi Teng Au
– Kenneth Leung
– Maciej Gron

Arguments

Laws against deepfakes are unclear because modified images create legal ambiguity about prosecution


UK’s 2025 Data Use and Access Act criminalized creation of intimate images but only covers adults


Polish legal system faces challenges with new situations involving child victims and perpetrators


Summary

Multiple speakers highlight that existing legal frameworks are insufficient to address the complexities of deepfake crimes, particularly when involving minors as both victims and perpetrators.


Topics

Legal and regulatory | Human rights | Cybersecurity


Platform cooperation and technical solutions are insufficient

Speakers

– Yi Teng Au
– Juliana Cunha
– Andrew Campling
– Robbert Hoving

Arguments

Platform hopping phenomenon where perpetrators shift from Telegram to other platforms making detection harder


Telegram accounts for 90% of messaging app reports in Brazil with limited cooperation with law enforcement


End-to-end encrypted messaging services present biggest challenge for blocking CSAM sharing


Search engines easily provide access to deepfake creation apps when searched


Summary

Speakers agree that current platform responses are inadequate, with limited cooperation from some platforms and technical challenges in encrypted environments allowing harmful content to persist and spread.


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Similar viewpoints

Both speakers provide complementary data about the scale of the deepfake crisis in Korean schools, emphasizing how widespread the problem has become and the casual attitude of perpetrators.

Speakers

– Ji Won Oh
– Yi Teng Au

Arguments

Korean schools experienced widespread deepfake sexual videos affecting hundreds of schools with secret Telegram chat rooms


Over 500 schools affected by deepfake videos in South Korea, with 54% of offenders claiming they did it ‘for fun’


Topics

Cybersecurity | Human rights | Sociocultural


Both speakers recognize that cultural factors, particularly around gender and shame, create significant barriers to addressing sexual deepfakes and require culturally sensitive approaches.

Speakers

– Juliana Cunha
– Janice Richardson

Arguments

Issue reflects broader systemic gender inequalities requiring cultural prevention beyond legal measures


Reporting challenges exist due to humiliation, especially in countries like Morocco and Tunisia


Topics

Human rights | Sociocultural | Legal and regulatory


Both speakers emphasize that the easy accessibility of deepfake creation tools is a fundamental problem requiring coordinated pressure on technology companies and platforms.

Speakers

– Yi Teng Au
– Janice Richardson

Arguments

Accessibility of AI tools like Stable Diffusion makes content creation too easy despite safeguards


International action needed with people joining together to pressure search engines and platforms


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Unexpected consensus

Technical innovation can be part of the solution

Speakers

– Yi Teng Au
– Janice Richardson

Arguments

Korean DeepFake dataset with 2.73 terabytes achieved 96% accuracy in detecting altered videos


Seoul Metropolitan Government developed detection tool identifying school items, uniforms, and youth slang


Technology should make images indelible and unchangeable once posted online


Explanation

Despite criticism of technology companies, there was unexpected consensus that technical solutions and innovations can be part of addressing the problem, with examples of successful detection systems and calls for better technical safeguards.


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Young people should be central to developing solutions

Speakers

– Juliana Cunha
– Janice Richardson

Arguments

Brazil created educational resources combining legal guidance with real-life cases and interactive activities


Young people indicate adults need education as they are often the ones suffering and lacking solutions


Explanation

There was unexpected consensus that young people should not just be protected but should be actively involved in creating solutions, with recognition that they often understand the problems better than adults.


Topics

Sociocultural | Human rights | Development


Overall assessment

Summary

Speakers demonstrated strong consensus on the need for multi-stakeholder collaboration, the cultural roots of the problem, the importance of education, inadequacy of current legal frameworks, and insufficient platform cooperation. There was also unexpected agreement on the potential for technical solutions and the importance of youth involvement in developing responses.


Consensus level

High level of consensus across all major aspects of the issue, suggesting a mature understanding of the problem’s complexity and the need for comprehensive, coordinated responses. This strong agreement among diverse stakeholders from different regions and sectors provides a solid foundation for developing effective global strategies to combat sexual deepfakes targeting minors.


Differences

Different viewpoints

Approach to banning deepfake creation applications

Speakers

– Frances (Participant)
– Yi Teng Au
– Janice Richardson

Arguments

Why not just have a real crackdown on these kinds of applications?


Laws against deepfakes are unclear because modified images create legal ambiguity about prosecution


International action needed with people joining together to pressure search engines and platforms


Summary

Frances advocates for outright banning of deepfake apps from app stores, while Yi Teng explains legal complexities make prosecution difficult, and Janice calls for coordinated pressure on platforms rather than just bans


Topics

Legal and regulatory | Infrastructure | Cybersecurity


Privacy versus child protection in encrypted communications

Speakers

– Andrew Campling
– Torsten Krause

Arguments

End-to-end encrypted messaging services present biggest challenge for blocking CSAM sharing


European Union discussing compromising privacy to detect CSAM in encrypted environments for three years


Summary

Andrew emphasizes technical solutions within encrypted systems without breaking encryption, while Torsten raises concerns about the EU’s consideration of compromising privacy for child protection


Topics

Cybersecurity | Human rights | Legal and regulatory


Role of online image sharing by schools and institutions

Speakers

– Janice Richardson
– Torsten Krause

Arguments

Schools still have a tendency to post the face, the image of their pupils in sports, in all sorts of activities. And this is the first problem


I would not imagine an internet and digital environment without photos, experiences of children and young persons online because they also have the right to see their experience reflected in the internet


Summary

Janice views school posting of student images as problematic source material for deepfakes, while Torsten argues children have rights to digital representation and asks what the solution would be


Topics

Human rights | Sociocultural | Cybersecurity


Unexpected differences

Effectiveness of influencer involvement in prevention

Speakers

– Mariana (Participant)
– Janice Richardson
– Juliana Cunha

Arguments

What’s the role of the influencer industry in shaping safety online?


Kids are actually sick of them. So I think that maybe this is a trend which is not growing


Influencers play a key role in the culture… we have influencers that is a role model for many young people


Explanation

Unexpected disagreement emerged about influencer effectiveness, with Janice dismissing their relevance while Juliana sees them as important cultural figures, despite both being experienced educators


Topics

Sociocultural | Human rights | Development


Prioritization of technical versus cultural solutions

Speakers

– Yi Teng Au
– Juliana Cunha

Arguments

Accessibility of AI tools like Stable Diffusion makes content creation too easy despite safeguards


Issue reflects broader systemic gender inequalities requiring cultural prevention beyond legal measures


Explanation

Despite both being young advocates, Yi Teng emphasizes technical accessibility problems while Juliana prioritizes cultural and systemic issues, showing generational perspectives aren’t uniform


Topics

Cybersecurity | Human rights | Sociocultural


Overall assessment

Summary

Main disagreements centered on regulatory approaches (banning vs. education), privacy-security balance, and whether to restrict or protect children’s digital presence


Disagreement level

Moderate disagreement level with speakers generally aligned on problem severity but differing on solution priorities and implementation methods. This suggests need for comprehensive approaches that integrate multiple perspectives rather than choosing single solutions.


Partial agreements

Partial agreements

Similar viewpoints

Both speakers provide complementary data about the scale of the deepfake crisis in Korean schools, emphasizing how widespread the problem has become and the casual attitude of perpetrators.

Speakers

– Ji Won Oh
– Yi Teng Au

Arguments

Korean schools experienced widespread deepfake sexual videos affecting hundreds of schools with secret Telegram chat rooms


Over 500 schools affected by deepfake videos in South Korea, with 54% of offenders claiming they did it ‘for fun’


Topics

Cybersecurity | Human rights | Sociocultural


Both speakers recognize that cultural factors, particularly around gender and shame, create significant barriers to addressing sexual deepfakes and require culturally sensitive approaches.

Speakers

– Juliana Cunha
– Janice Richardson

Arguments

Issue reflects broader systemic gender inequalities requiring cultural prevention beyond legal measures


Reporting challenges exist due to humiliation, especially in countries like Morocco and Tunisia


Topics

Human rights | Sociocultural | Legal and regulatory


Both speakers emphasize that the easy accessibility of deepfake creation tools is a fundamental problem requiring coordinated pressure on technology companies and platforms.

Speakers

– Yi Teng Au
– Janice Richardson

Arguments

Accessibility of AI tools like Stable Diffusion makes content creation too easy despite safeguards


International action needed with people joining together to pressure search engines and platforms


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Takeaways

Key takeaways

Sexual deepfakes targeting minors is a rapidly growing global crisis, with Korea seeing a seven-fold increase in cases from 2021-2024 and Brazil experiencing historic spikes in AI-generated CSAM reports


The problem is fundamentally rooted in gender-based violence and cultural attitudes, requiring prevention strategies that go beyond legal and technical solutions


Current legal frameworks are inadequate, with laws struggling to address the ambiguity of modified images and the complexity of cases involving both child victims and perpetrators


Platform hopping and encrypted messaging services present major enforcement challenges, with perpetrators easily shifting between platforms to avoid detection


Education and digital literacy programs are essential, but must be implemented systematically with proper teacher training and age-appropriate curricula


Multi-stakeholder collaboration between governments, tech companies, educators, and civil society is critical but currently insufficient


Technical solutions like detection tools and datasets show promise but need broader implementation and local adaptation


Many young perpetrators don’t understand the seriousness of their actions, viewing deepfake creation as harmless fun rather than abuse


Resolutions and action items

Participants encouraged to share additional thoughts and resources on the csit.tk platform for one week post-workshop to synthesize discussion into a report


Call for international coalition to pressure search engines to remove access to deepfake creation apps


Recommendation for countries without detection datasets to develop ones tailored to local needs, following Korea’s example


Proposal for tech industry to develop technology making images indelible and unchangeable once posted online


Suggestion to educate government ministers and education departments from top-down while maintaining grassroots awareness efforts


Need for stronger industry accountability measures against products deliberately designed to violate rights and exploit vulnerabilities


Unresolved issues

How to balance children’s right to have their experiences reflected online with protection from image misuse


Whether to compromise privacy and encryption to detect CSAM in encrypted messaging environments


How to prevent misuse of child protection regulations by authoritarian governments to silence free speech


Effective global coordination mechanisms when legal responses vary significantly between countries


How to address conservative voices in government and education who dismiss these issues as harmless


Practical implementation of detection systems that can stop viral spread before content reaches wide audiences


Role and regulation of influencers, particularly child influencers, in shaping online safety culture


How to improve cross-platform collaboration when companies have competing interests


Suggested compromises

Focus enforcement efforts on end-to-end encrypted messaging services where technical blocking is possible without breaking encryption


Combine legal accountability measures with cultural prevention through long-term educational interventions


Use influencers positively in awareness campaigns while regulating harmful promotional activities


Implement cascade training models with two resource teachers per school for rapid issue escalation


Develop age-appropriate educational approaches that are engaging rather than threatening, such as using magicians for delivery


Create anonymous reporting mechanisms to address cultural barriers around shame and humiliation


Balance technical innovation in detection with privacy protection concerns


Coordinate awareness campaigns globally while allowing for local cultural adaptation


Thought provoking comments

This problem already affected large schools in Brazil, especially private schools, with several cases being reported by media outlets. The new trend challenges us to find an appropriate response, especially due to the fact that victims and perpetrators are minors, and the boundary between sexual experimentation and sexual abuse is becoming a little bit blurred.

Speaker

Juliana Cunha


Reason

This comment is deeply insightful because it identifies a fundamental challenge in addressing deepfake sexual abuse among minors – the blurring of traditional boundaries between normal adolescent sexual exploration and abuse. It highlights the complexity of creating appropriate responses when both victims and perpetrators are children, challenging conventional approaches to both prevention and punishment.


Impact

This observation shifted the discussion from purely technical and legal solutions toward recognizing the nuanced developmental and psychological aspects of the problem. It helped frame the issue as requiring more sophisticated, age-appropriate responses rather than simply applying adult-focused legal frameworks.


When I talk to young people, they say, actually, it’s not the nude profiles that are bothersome. If someone puts a bikini, but then a very sexual stance, is it a nude profile? This is where it really raises issues. If there’s a little bit of clothing, then they think they’re out of the category of a sexual fake profile.

Speaker

Janice Richardson


Reason

This comment reveals a critical gap between adult perceptions of harm and young people’s actual experiences. It challenges the binary thinking around what constitutes harmful content and exposes how perpetrators might exploit these gray areas. The insight shows how young people’s understanding of the issue differs from adult frameworks.


Impact

This comment prompted deeper consideration of how policies and education programs need to address the spectrum of harmful content, not just obvious cases. It highlighted the need for more nuanced approaches that consider how young people actually perceive and categorize these violations.


Young people also turn to me and say, you should be educating the adults about this because very often, it’s them, the ones that are suffering, the ones that are doing this, and the ones that don’t have the opportunity to tackle it.

Speaker

Janice Richardson


Reason

This comment is particularly thought-provoking because it inverts the typical assumption that children need to be educated by adults. Instead, it reveals that young people see adults as part of the problem and lacking understanding. This challenges the traditional top-down approach to digital safety education.


Impact

This observation led to a fundamental questioning of who should be the target of education efforts and how programs should be designed. It suggested that effective solutions require educating entire communities, not just young people, and that youth voices should be central to developing responses.


The misuse of AI to create sexualized images of peers is not a tech issue or a legal gap. It’s a reflection of broader systemic gender inequalities. That’s why prevention must also be cultural.

Speaker

Juliana Cunha


Reason

This comment reframes the entire discussion by identifying the root cause as systemic gender inequality rather than just a technological or legal problem. It’s insightful because it moves beyond symptom-focused solutions to address underlying cultural and social structures that enable this abuse.


Impact

This perspective shifted the conversation toward recognizing that technical and legal solutions alone are insufficient. It emphasized the need for long-term cultural change and gender equality work, influencing how other participants discussed prevention strategies and the importance of addressing social norms.


If we’re so clever with technology, why can’t we make something that when once we’ve put an image online, it becomes indelible, it becomes unchangeable. I’d like more efforts from the tech industry.

Speaker

Janice Richardson


Reason

This comment is thought-provoking because it challenges the tech industry’s priorities and capabilities. It questions why technological innovation isn’t being directed toward protecting users rather than just creating new features, and suggests a fundamental redesign of how digital content works.


Impact

This comment sparked discussion about the role and responsibility of tech companies, moving beyond just content moderation to considering fundamental changes in how platforms and digital content function. It challenged participants to think about proactive rather than reactive technical solutions.


Our core belief is the best way to protect children is by listening to them. This project challenged the usual top-down approach. Instead of responding with moral panic or punitive measures, we are asking how do teens actually experience this? What do they think would help?

Speaker

Juliana Cunha


Reason

This comment is insightful because it advocates for a fundamentally different methodology in addressing child protection – one that centers children’s voices and experiences rather than adult assumptions. It challenges the typical ‘moral panic’ responses that often characterize discussions about children and technology.


Impact

This perspective influenced the discussion by emphasizing the importance of youth-centered research and policy development. It reinforced the need for evidence-based approaches that actually reflect young people’s experiences rather than adult fears or assumptions about what children need.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches to addressing deepfake sexual abuse. They moved the conversation beyond simple technical and legal solutions toward recognizing the complex cultural, developmental, and systemic factors involved. The most impactful insights came from recognizing that young people’s experiences and perspectives differ significantly from adult assumptions, that the problem is rooted in broader gender inequalities rather than just technology misuse, and that effective solutions require listening to affected youth rather than imposing top-down approaches. These comments collectively shifted the discussion from a problem-focused to a solution-oriented dialogue that emphasized collaboration, cultural change, and youth empowerment as essential components of any effective response.


Follow-up questions

What is the role of influencers and the influencer industry in shaping online safety, particularly regarding kid influencers and young content creators?

Speaker

Mariana from DataSphere Initiative


Explanation

This addresses a gap in understanding how influential figures in digital spaces can either contribute to or help prevent deepfake abuse, especially given the rise of very young influencers


Why can’t there be a complete crackdown and ban on deepfake creation applications, and what are the technical and legal challenges in drawing the line on what apps to ban?

Speaker

Frances from Youthdig


Explanation

This highlights the need for clearer understanding of regulatory approaches and their feasibility in addressing the root accessibility problem


How can search engines be leveraged to prevent access to deepfake creation tools, given that they easily surface when searched?

Speaker

Robert Hoving from Safer Internet Centre Netherlands


Explanation

This identifies a potential intervention point that hasn’t been fully explored – controlling discoverability rather than just the apps themselves


What are effective global awareness strategies that can transcend legal jurisdictional limitations?

Speaker

Robert Hoving from Safer Internet Centre Netherlands


Explanation

This addresses the challenge that legal solutions may be limited by jurisdiction, but awareness campaigns could have broader reach


How can criminal law systems be adapted to handle cases where both victims and perpetrators are minors, particularly regarding existing frameworks like pedophile registers?

Speaker

Maciej Gronia from Polish Research Institute NASK


Explanation

This highlights a significant gap in legal frameworks that weren’t designed for peer-to-peer abuse among minors


How can CSAM detection and prevention be implemented in end-to-end encrypted messaging services without breaking encryption?

Speaker

Andrew Campling from Internet Watch Foundation


Explanation

This addresses a critical technical challenge in balancing privacy protection with child safety in encrypted communications
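The encryption question here is, at bottom, an architectural one. One frequently discussed direction (not endorsed in the session) is client-side matching: compare a perceptual hash of an image against a list of known-abuse hashes on the device, before encryption, so the encrypted channel itself is never inspected. The Python sketch below is a minimal illustration of that idea only; the average-hash function stands in for far more robust production hashes such as PhotoDNA, and the KNOWN_HASHES list and threshold are hypothetical.

```python
# Illustrative sketch of "client-side scanning": hash an image and check it
# against known-abuse hashes BEFORE encryption, leaving encryption intact.
# The hash list and threshold below are hypothetical placeholders.
from PIL import Image  # requires the Pillow package


def average_hash(path: str, size: int = 8) -> int:
    """64-bit average hash: shrink, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


KNOWN_HASHES: set[int] = set()  # hypothetical list distributed to clients


def flag_before_encrypting(path: str, threshold: int = 5) -> bool:
    """True if the image is near a known hash; the ciphertext is never read."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```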


What solutions exist for maintaining children’s right to digital representation while protecting them from deepfake abuse?

Speaker

Torsten Krause from Digital Opportunities Foundation


Explanation

This highlights the tension between child protection and children’s rights to participate in digital spaces


What is the youth perspective on compromising privacy to detect CSAM and grooming in encrypted environments?

Speaker

Torsten Krause from Digital Opportunities Foundation


Explanation

This seeks to understand how the primary affected demographic views the privacy vs. safety trade-off


What preventative measures can students themselves take to combat deepfakes, beyond education and policy implementation?

Speaker

Claire, student from Hong Kong


Explanation

This seeks practical, actionable steps that young people can take independently to protect themselves


What detection systems and proactive mechanisms are being developed to identify and stop sexual deepfakes targeting minors before they go viral?

Speaker

Sana from NetMission


Explanation

This addresses the need for technical solutions that can intervene early in the distribution process


How can awareness of deepfake issues be improved in education systems when governments and education departments are led by conservative voices who dismiss these issues?

Speaker

Yuri Bokovoy from Finnish Green Party


Explanation

This highlights political and institutional barriers to implementing educational solutions


How can regulations protecting children from deepfakes be designed to prevent misuse by authoritarian governments to silence free speech?

Speaker

Yuri Bokovoy from Finnish Green Party


Explanation

This addresses the critical balance between child protection and preserving democratic freedoms


Are the innovative measures mentioned in the Korean case study truly effective in practice?

Speaker

Zoom participant (unnamed)


Explanation

This seeks evaluation of real-world effectiveness of implemented technical solutions


Why can’t technology be developed to make images indelible and unchangeable once posted online?

Speaker

Janice Richardson


Explanation

This suggests a technical research direction that could prevent manipulation of images at the source
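Strictly speaking, any image a screen can display can be copied and re-edited, so a truly “indelible” image is unlikely to be achievable; what cryptography can offer instead is tamper-evidence, in the spirit of C2PA content credentials: sign the bytes at publication, and any later alteration fails verification. Below is a minimal sketch, assuming the Python `cryptography` package; the key handling and the sign-on-upload workflow are hypothetical simplifications, not a description of any speaker’s proposal.

```python
# Illustrative sketch: provenance signing makes edits DETECTABLE, not
# impossible. Key management here is a hypothetical simplification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_image(image_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign the exact image bytes at publication time."""
    return private_key.sign(image_bytes)


def is_unaltered(image_bytes: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """True only if the bytes match what was originally signed."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False


# Usage: a platform (hypothetically) signs on upload and re-checks on re-share.
key = Ed25519PrivateKey.generate()
original = b"...image bytes..."
sig = sign_image(original, key)
assert is_unaltered(original, sig, key.public_key())
assert not is_unaltered(original + b"edited", sig, key.public_key())
```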


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.