Harnessing AI for Child Protection | IGF 2023

11 Oct 2023 04:00h - 05:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

During the discussion, multiple speakers expressed concerns about the need to protect children from bullying on social media platforms such as those operated by Meta. They raised questions about Meta's efforts in content moderation for child protection across various languages and countries, casting doubt on the effectiveness of its strategies and policies.

The discussion also focused on the importance of social media companies enhancing their user registration systems to prevent misuse. It was argued that stricter authentication systems are necessary to prevent false identities and misuse of social media platforms. Personal incidents were shared to support this stance.

Additionally, the potential of artificial intelligence (AI) in identifying local languages on social media was discussed. It was seen as a positive step in preventing misuse and promoting responsible use of these platforms.

Responsibility and accountability of social media platforms were emphasized, with participants arguing that they should be held accountable for preventing misuse and ensuring user safety.

The discussion also highlighted the adverse effects of social media on young people’s mental health. The peer pressure faced on social media can lead to anxiety, depression, body image concerns, eating disorders, and self-harm. Social media companies were urged to take proactive measures to tackle online exploitation and address the negative impact on mental health.

Lastly, concerns were raised about phishing on Facebook, noting cases where young users are tricked into revealing their contact details and passwords. Urgent action was called for to protect user data and prevent phishing attacks.

In conclusion, the discussion underscored the urgent need for social media platforms to prioritize user safety, particularly for children. Efforts in content moderation, user registration systems, authentication systems, language detection, accountability, and mental health support were identified as crucial. It is clear that significant challenges remain in creating a safer and more responsible social media environment.

Babu Ram Aryal

The analysis covers a range of topics, starting with artificial intelligence (AI) and its impact on different fields. It acknowledges that AI offers numerous opportunities in areas such as education and law. However, there is also concern that AI is supplanting human intelligence in various domains. This raises questions about the extent to which AI should be relied upon and whether it poses a threat to human expertise and jobs.

Another topic explored is the access that children have to technology and the internet. On one hand, it is recognised that children are growing up in a new digital era where they utilise the internet to create their own world. The analysis highlights the example of Babu’s own children, who are passionate about technology and eager to use the internet. This suggests that technology can encourage creativity and learning among young minds.

On the other hand, there are legitimate concerns about the safety of children online. The argument put forward is that allowing children unrestricted access to technology and the internet brings about potential risks. The analysis does not delve into specific risks, but it does acknowledge the existence of concerns and suggests that caution should be exercised.

An academic perspective is also presented, which recognises the potential benefits of AI for children, as well as the associated risks. This viewpoint emphasises that permitting children to engage with platforms featuring AI can provide opportunities for growth and learning. However, it also acknowledges the existence of risks inherent in such interactions.

The conversation extends to the realm of cybercrime and the importance of expertise in digital forensic analysis. The analysis highlights that Babu is keen to learn from Michael’s experiences and practices relating to cybercrime. This indicates that there is a recognition of the significance of specialised knowledge and skills in addressing and preventing cybercrime.

Furthermore, the analysis raises the issue of child rights and the need for better control measures on social media platforms. It presents examples where individuals have disguised themselves as children in order to exploit others. This calls for improved registration and content control systems on social media platforms to protect children’s rights and prevent similar occurrences in the future.

In conclusion, the analysis reflects a diverse range of perspectives on various topics. It recognises the potential opportunities provided by AI in various fields, but also points out concerns related to the dominance of AI over human intelligence. It acknowledges the positive aspects of children having access to technology, but also raises valid concerns about safety. Additionally, the importance of expertise in combating cybercrime and the need for better control measures to protect child rights on social media platforms are highlighted. Overall, the analysis showcases the complexity and multifaceted nature of these issues.

Sarim Aziz

Child safety issues are a global challenge that requires a global, multi-stakeholder approach. Various stakeholders from different sectors, such as governments, non-governmental organizations, and tech companies, need to come together to address this issue collectively. The importance of this approach is underlined by the fact that child safety is not limited to any particular region or country but affects children worldwide.

One of the key aspects of addressing child safety issues is the use of technology, particularly artificial intelligence (AI). AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues. For example, AI can disrupt suspicious behaviors and patterns that may indicate child exploitation. Technology companies such as Microsoft and Meta have developed AI-based solutions to detect and combat child sexual abuse material (CSAM); Microsoft's PhotoDNA technology and Meta's open-sourced PDQ and TMK technologies are notable examples. These technologies have been effective in detecting CSAM and have played a significant role in safeguarding children online.
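To make the matching step concrete, here is a minimal sketch of how hash-based detection compares an upload against a list of known-material hashes. It assumes hex-encoded 256-bit hashes and an illustrative distance threshold; the actual PDQ and TMK+PDQF algorithms, which compute the hashes themselves, are published in Meta's open-source repositories, as the transcript notes.

```python
# Illustrative sketch only: the comparison step of hash-based CSAM
# detection. A perceptual hash (e.g. a 256-bit PDQ hash) of an upload
# is compared with hashes of known material; a small Hamming distance
# counts as a match. The hex format and the threshold of 31 bits are
# assumptions for illustration, not production values.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two hex-encoded hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def matches_known_material(candidate: str, known_hashes: list[str],
                           threshold: int = 31) -> bool:
    """Flag the candidate if it is near any hash in the shared list."""
    return any(hamming_distance(candidate, known) <= threshold
               for known in known_hashes)

# Two hashes differing in a single bit are treated as the same image.
known = ["8f" * 32]                          # placeholder 256-bit hash
probe = "8f" * 31 + "8e"                     # same hash, one bit flipped
print(matches_known_material(probe, known))  # True
```

In production such lists hold millions of shared hashes and are queried with indexed nearest-neighbour search rather than a linear scan, but the matching principle is the same.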

However, it is important to note that technology alone cannot solve child safety issues. Law enforcement and safety organizations are vital components in the response. Their expertise and collaboration with technology companies such as Meta are crucial in building case systems, investigating reports, and taking necessary action to combat child exploitation. Meta, for instance, collaborates with the National Center for Missing & Exploited Children (NCMEC) and assists it in its efforts to protect children.

Age verification is another important aspect of child safety online. Technology companies are testing age verification tools, such as the ones being tested on Instagram by Meta, to prevent minors from accessing inappropriate content. These tools aim to verify the age of users and restrict their access to age-inappropriate content. However, the challenge lies in standardizing age verification measures across different jurisdictions, as different countries have different age limits for minors using social media platforms.
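To illustrate why differing age limits complicate platform logic, the following hypothetical sketch gates registration on a declared birth date against a per-country minimum age. All table values and names are invented for illustration; they do not describe Meta's actual implementation or any country's law.

```python
# Hypothetical sketch of a per-jurisdiction minimum-age gate at
# registration. Values and names are invented for illustration only.

from datetime import date
from typing import Optional

MIN_AGE_BY_JURISDICTION = {"default": 13, "KR": 14, "DE": 16}  # illustrative

def age_on(birth_date: date, today: date) -> int:
    """Whole years of age on the given day."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1                  # birthday not yet reached this year
    return years

def may_register(birth_date: date, jurisdiction: str,
                 today: Optional[date] = None) -> bool:
    """True if the declared birth date meets the local minimum age."""
    today = today or date.today()
    min_age = MIN_AGE_BY_JURISDICTION.get(
        jurisdiction, MIN_AGE_BY_JURISDICTION["default"])
    return age_on(birth_date, today) >= min_age

# A declared birth date exactly 13 years back passes the default rule
# but fails under the stricter illustrative "DE" entry.
print(may_register(date(2010, 10, 11), "US", today=date(2023, 10, 11)))  # True
print(may_register(date(2010, 10, 11), "DE", today=date(2023, 10, 11)))  # False
```

A declared birth date is, of course, only as reliable as the user's honesty, which is why the behavioural age-verification tests described in this section matter.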

Platforms like Meta have taken proactive steps to prioritize safety by design. They have changed default settings to safeguard youth accounts, cooperate with law enforcement bodies when necessary, and enforce policies against bullying and harassment. AI tools and human reviewers are employed to moderate and evaluate content, ensuring that harmful and inappropriate content is removed from the platforms.

Collaboration with safety partners and law enforcement is crucial in strengthening child protection responses. Platforms like Meta work closely with safety partners worldwide and have established safety advisory groups composed of experts from around the world. Integration of AI tools with law enforcement can lead to rapid responses against child abuse material and other safety concerns.

It is important to note that while AI can assist in age verification and protecting minors from inappropriate content, it is not a perfect solution. Human intervention and investigation are still needed to ensure the accuracy and effectiveness of age verification measures.

Overall, this summary highlights the need for a global, multi-stakeholder approach to child safety issues, with a focus on the use of technology, collaboration with law enforcement and safety organizations, age verification measures, and prioritizing safety by design. It also acknowledges the limitations of technology and the importance of human intervention in ensuring child safety.

Michael Ilishebo

Content moderation online for children presents a significant challenge, particularly in Zambia where children are exposed to adult content due to the lack of proper control or filters. Despite the advancements in Artificial Intelligence (AI), it has not been successful in effectively addressing these issues, especially in accurately identifying the age or gender of users.

However, there is growing momentum in discussions around child online protection and data privacy. In Zambia, this has resulted in the enactment of the Cybersecurity and Cybercrimes Act of 2021. This legislation aims to address cyberbullying and other forms of online abuse, providing some legal measures to protect children.

Nevertheless, numerous cases of child abuse on online platforms remain unreported. The response from platform providers varies, with Facebook and Instagram being more responsive compared to newer platforms like TikTok. This highlights the need for consistent and effective response mechanisms across all platforms.

On a positive note, local providers in Zambia demonstrate effective compliance in bringing down inappropriate content. They adhere to guidelines that set age limits for certain types of content, making it easier to remove content that is not suitable for children.

Age-gating on platforms is another area of concern, as many children can easily fool the verification systems put in place. Reports of children setting their ages as 150 years or profiles not accurately reflecting their age raise questions about the effectiveness of age verification mechanisms.

Meta, a platform provider, deserves commendation for its response to issues related to child exploitation. It prioritizes addressing these issues and provides requested information promptly, which is crucial for investigations and for protecting children.

The classification of inappropriate content poses a significant challenge, especially considering cultural differences and diverse definitions. What might be normal or acceptable in one country can be completely inappropriate in another. For example, an image of a child holding a gun might be considered normal in the United States but unheard of in Zambia or Africa. Therefore, the classification of inappropriate content needs to be sensitive to cultural contexts.

In response to the challenges posed by online child protection, Zambia has introduced two significant pieces of legislation: the Cybersecurity and Cybercrimes Act and the Data Protection Act. These legislative measures aim to address issues of cybersecurity and data protection, which are essential for safeguarding children online.

To ensure child internet safety, a combination of manual and technological parental oversight is crucial. Installing family-friendly accounts and using filtering technology can help monitor and control what children view online. However, it is important to note that children can still find ways to outsmart these controls or be influenced by third parties to visit harmful sites.

In conclusion, protecting children online requires a multifaceted approach. Legislative measures, such as the ones implemented in Zambia, combined with the use of protective technologies and active parental oversight, are essential. Additionally, close collaboration between the private sector, governments, the public sector, and technology companies is crucial in addressing the challenges of policing cyberspace. While AI plays a role, it is important to recognize that relying solely on AI is insufficient. The human factor and close collaboration remain indispensable in effectively protecting children online and addressing the complex issues associated with content moderation and classification.

Jutta Croll

The discussions revolve around protecting children in the digital environment, specifically addressing issues like online child abuse and inappropriate communication. The general sentiment is positive towards using artificial intelligence (AI) to improve the digital environment for children and detect risks. It is argued that AI tools can identify instances of child sexual abuse online, although they struggle with unclassified cases. Additionally, online platform providers could use AI to detect abnormal patterns of communication indicating grooming. However, there is concern that relying solely on technology for detection is insufficient. The responsibility for detection should not rest solely on technology, evoking a negative sentiment.
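One way to picture the pattern-based detection described here, which inspects metadata rather than message content, is a fan-out heuristic: flag an account that initiates first contact with unusually many unconnected profiles in a short period, the "one sender addressing a thousand different profiles" pattern Jutta Croll describes later in the transcript. The sketch below is a toy version under stated assumptions; the event format, window, and threshold are hypothetical, not any platform's documented rule.

```python
# Toy illustration of metadata-based grooming-pattern detection: flag
# senders who initiate first contact with many unconnected profiles
# within a short window. Event format, window, and threshold are
# hypothetical choices made for this sketch.

from collections import defaultdict
from datetime import timedelta

FANOUT_THRESHOLD = 100         # distinct strangers contacted...
WINDOW = timedelta(hours=24)   # ...within this sliding window

def flag_suspected_fanout(events):
    """events: iterable of (timestamp, sender_id, recipient_id,
    is_connected) tuples, one per first-contact message, where
    timestamp is a datetime. Returns senders exceeding the threshold."""
    first_contacts = defaultdict(list)     # sender -> [(ts, recipient)]
    for ts, sender, recipient, is_connected in events:
        if not is_connected:               # count only strangers
            first_contacts[sender].append((ts, recipient))
    flagged = set()
    for sender, contacts in first_contacts.items():
        contacts.sort()                    # chronological order
        start = 0
        for end in range(len(contacts)):
            # shrink the window so it spans at most WINDOW
            while contacts[end][0] - contacts[start][0] > WINDOW:
                start += 1
            distinct = {r for _, r in contacts[start:end + 1]}
            if len(distinct) >= FANOUT_THRESHOLD:
                flagged.add(sender)
                break
    return flagged
```

A signal like this would only queue an account for human review, consistent with the discussion's point that detection cannot be left to technology alone.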

There is a debate about the role of regulators and policymakers in addressing these issues. It was argued that these issues cannot be tackled by regulators and policymakers alone; the responsibility falls on platform providers, who have the resources and knowledge to implement AI-based solutions effectively. This stance is received with a neutral sentiment.

The right to privacy and the protection of children in the digital era present challenges for parents. The UNCRC emphasizes children's right to privacy, but also stresses the need to strike a balance between digital privacy and parental protection obligations: monitoring children's digital activity is seen as intrusive and an infringement of privacy, while leaving it unmonitored raises questions about accountability for harm. This viewpoint is given a negative sentiment.

Age verification is seen as essential in addressing inappropriate communication and content concerns. A lack of age verification makes it difficult to protect children from inappropriate content and advertisers. The sentiment towards age verification is positive.

Dialogue between platform providers and regulators is considered crucial for finding constructive solutions in child protection. Such dialogue helps identify future-proof solutions. This argument receives a positive sentiment.

Newer legislation is seen as more effective in addressing child sexual abuse in the online environment and should focus more on it. For instance, Germany amended its Youth Protection Act to specifically address the digital environment. The sentiment towards this is positive.

The age of consent principle is under pressure in the digital environment as discerning consensual from non-consensual content becomes challenging. The sentiment towards this argument is neutral. There are differing stances on self-generated sexualized imagery shared among young people. Some argue that it should not be criminalized, while others maintain a neutral position, questioning whether AI can determine consensual sharing of images. The sentiment towards the stance that self-generated sexualized imagery should not be criminalized is positive.

Overall, the discussions emphasize the importance of child protection and making decisions that prioritize the best interests of the child. AI can play a role in child protection, but human intervention is still considered necessary. It is concluded that all decisions, including policy making, actions of platform providers, and technological innovations, should consider the best interests of the child.

Ghimire Gopal Krishna

Nepal has a robust legal and constitutional framework in place that specifically addresses the protection of child rights. Article 39 of Nepal’s constitution explicitly outlines the rights of every child, including the right to name, education, health, proper care, and protection from issues such as child labour, child marriage, kidnapping, abuse, and torture. The constitution also prohibits child engagement in any hazardous work or recruitment into the military or armed groups.

To further strengthen child protection, Nepal has implemented the Child Protection Act, which criminalises child abuse activities both online and offline. Courts in Nepal strictly enforce these laws and take a proactive stance against any form of child abuse. This indicates a positive commitment from the legal system to safeguarding children’s well-being and ensuring their safety.

In addition to legal provisions, Nepal has also developed online child safety guidelines. These guidelines provide recommendations and guidance to various stakeholders on actions that can be taken to protect children online. This highlights Nepal’s effort to address the challenges posed by the digital age and ensure the safety of children in online spaces.

However, ongoing debates and discussions surround the appropriate age for adulthood, voting rights, citizenship, and marriage in Nepal. These discussions aim to determine the age at which individuals should be granted certain legal landmarks. The age of consent, in particular, has been a subject of court cases and controversies, with several individuals facing legal consequences due to age-related consent issues. This reflects the complexity and importance of addressing these issues in a just and careful manner.

Notably, Ghimire Gopal Krishna, the president of the Nepal Bar Association, has shown his commitment to positive amendments related to child rights protection acts. He has signed the Child Right Protection Treaty, further demonstrating his dedication to upholding child rights. This highlights the involvement of key stakeholders in advocating for improved legal frameworks that protect the rights and well-being of children in Nepal.

Overall, Nepal’s legal and constitutional provisions for child protection are commendable, with specific provisions for education, health, and safeguarding children from various forms of abuse. The implementation of the Child Protection Act and online child safety guidelines further strengthens these protections. However, ongoing debates and discussions surrounding the appropriate age for various legal landmarks highlight the need for careful consideration and resolution. The commitment of Ghimire Gopal Krishna to positive amendments underscores the importance of continuous efforts to improve child rights protection in Nepal.

Session transcript

Babu Ram Aryal:
somewhere around the world, and good afternoon, maybe late evening, somewhere. This is Baburam Aryal. I'm a lawyer by profession, I'm from Nepal, I lead the Digital Freedom Coalition in Nepal, and I am moderating this session today. I have a very distinguished panel here to discuss artificial intelligence and child protection issues in the contemporary world. Let me briefly introduce my esteemed panelists. Next to me is senior advocate Gopal Krishna Ghimire. He is the president of the Nepal Bar Association and brings more than 30 to 35 years of litigation experience in Nepal. Jutta Croll is a very senior child rights protection activist; she leads her organization and contributes through the Dynamic Coalition on Child Rights. Sarim Aziz is policy director for South Asia at Meta, with long experience on platform issues, the protection of child rights, and other issues as well. Next to Sarim is Michael. Michael deals directly with these issues: he is a senior official in the Zambian police, focusing especially on cybercrime investigation and digital forensic analysis. So, having introduced my panel, and noting that my colleague Ananda Gautam is moderating the online participants, I would like to begin with a very brief statement of the objective of today's discussion. Just to my right I am seeing two kids; coincidentally, they are my kids as well. They are very passionate about technology, they are very keen on using the internet, and we have had a big discussion about whether we allow our kids access to technology and connectivity. Our experience shows that allowing them onto the platforms is an opportunity for them. They are growing up in a new regime, a new world, and they have created their own world in their own way. Sometimes I fear whether I am leading my kids into a very risky world or not, and this led me to engage with this issue: technology and risk, technology and opportunity. Now, artificial intelligence has taken over much of human intelligence in various areas of work, like education, law, and other professions. Artificial intelligence is giving opportunity; lots of opportunities are there, but simultaneously there are some risks as well. So in this discussion we'll take up the artificial intelligence issues and the child protection issues, and harnessing artificial intelligence for child protection. There are various tools available around the world, and these are accessible to all segments of people, including children and elderly people. So, to begin, I'll go to Michael, whose responsibility is dealing with these kinds of issues regularly. Michael, what is your personal experience from your department? What are the major issues that you have experienced? Once we hear from you, we'll take this discussion to a further level.

Michael Ilishebo:
Michael. Good afternoon, good morning, good evening. As a law enforcement officer dealing with cybercrime and digital forensic issues, the moderating of content online, from both the human side and the AI side, has posed a challenge for our little ones, in the sense that, speaking as somebody from the developing world, we are mostly consuming news or any other form of entertainment or content online that is not generated in our region. Of course, we're not generating our own content, but the aspect of being a gatekeeper as parents, or using technology to filter content which is not supposed to be shown or exposed to the little ones, has become a bit of a challenge. I'll give you a simple example. If you are analyzing a mobile device from a child who is maybe 16, the content that you find on their phone, the data they have in terms of browsing history, shows that there is no control. So whatever is exposed to an adult ends up being exposed to a little one. As a result, it has become a challenge to address issues of content moderation on both fronts. Of course, there could be some aspects of AI that could help moderate some of this content, but if we remove the human factor from it, AI will not be able to address most of the challenges that we are facing right now. Further, in terms of combating crime or child exploitation incidents, you will find that with most of the sites that host this content, despite them having clear guidelines and policies on gatekeeping in terms of age, our children still find their way into places online where they're not supposed to be. Of course, there's no system that will detect the user of a phone and indicate their age or gender as a human being would. It still remains a challenge in the sense that once a phone is in the hands of a minor, you don't have control over what they see; you don't have control over what they do with it. So basically it has become a serious challenge, for the little ones and for us policing cyberspace to ensure that the little ones, the minors, are protected from content that is not supposed to be exposed to them.
Thanks, Michael. I would like to know your experience. I belong to Nepalese society, and Zambian society might be similar in terms of education and all these things. What are the trends of abuse cases in Zambia? Do you remember any trends?
So basically, in terms of abuse, Zambia, like any other country, has those challenges. I'll give an example. Of late, the talks on child online protection have been gaining momentum. There have been clear guidelines from government to ensure that issues of child online protection, data privacy, and the safety and security of everyone online, including the little ones, are addressed, and this has been gaining momentum through the enactment of various pieces of legislation. We have the Cybersecurity and Cybercrimes Act of 2021, which has now clearly outlined the types of cyberbullying that are outlawed. So basically, if you go on social media platforms such as Facebook, TikTok, Instagram, Snapchat and the like, most of the bad actors who engage in these bad activities of either sending bad images to the children or any other content that we deem inappropriate, most of them have been either arrested or talked to; at times, it is within their age range, and they share these things among themselves as minors.
If it’s a minor of course you talk to them, you counsel them, you try to bring them to sanity in terms of their thinking. But if it’s an adult you have to know their intentions. So one of our experiences that the law itself is slowly addressing some of these challenges that we are facing. But again that does not stop there. There are a lot of cases or scenarios that remain unreported. So it is difficult for us to literally address those challenges. But in a nutshell I would literally tell you that the challenges are there, the problems are there, but of course addressing them is not a one-day issue. It’s about continuous improvement and continuous usage of the law and technology based especially from the service providers to address some of these challenges.

Babu Ram Aryal:
Thanks Michael. I’ll come to Utah. Utah, you have been engaged in child protection since long. You have good experience. We are seeing each other in IGF for several times and then shared the discussion as well. You also belong to, you are also member of Dynamic Coalition on Child Rights. So what is your personal experience from protection issues and ethical legal issues on protection of children online especially when AI is significantly contributing and intervening on these platforms? Utah.

Jutta Croll:
Yeah, thank you so much for not only inviting me but posing such an important question to me. First of all, I would like to say you introduced me as an expert in child protection issues, and you may know that the Dynamic Coalition even changed its name from the Child Online Safety Coalition to the Children's Rights Coalition in the digital environment. I think it's important to put that right from the beginning: children have a right to protection, to provision, and to participation. So we always need to look for a balanced approach across these areas of rights. And of course, when it comes to artificial intelligence, I would like to quote from General Comment No. 25 to the Convention on the Rights of the Child. You may know that the rights of the child were laid down in 1989, when, although the Internet was there, it was not designed to be used by children. And the UN Convention doesn't refer in any way to the Internet as a means of communication, of access to information, and so on. So that was the reason why, four or five years ago, the United Nations Committee on the Rights of the Child decided to issue such a general comment in regard to children's rights in the digital environment, to take a closer look into what it means that children are now living in a world that is heavily affected by the use of digital media, and to look into how we can protect them. And one of the very first articles of this general comment says explicitly that artificial intelligence is part of the digital environment. It's not a single thing; it's woven into everything that now makes up the digital environment. It's therefore necessary to look at whether artificial intelligence can be used, or is able, to improve the digital environment for children; whether it can help us address the risks that have already been mentioned by Michael; whether it can help to detect content that is, on the one hand, harmful for children to watch on the Internet, but also content that is directly related to the abuse of children, which is where we are talking about child sexual abuse imagery. But nowadays, and that is also due to the use of artificial intelligence and new functionalities and technologies, the Internet is used to perform online live sexual abuse of children. And that is also where we have to look at how artificial intelligence can be beneficial in detecting these things, but also where it might pose additional risks to children. And I stop at that point, and I'm pretty sure we will go deeper into that.
I'll come to the detection side in the next round. Jutta, can you share some more issues from the ethical and legal side? If you can shed some light on this.
You mean the ethical and legal side of the detection of child sexual abuse imagery in general?
Ethical issues of the use and misuse of technology and platforms.
Okay. I do think that the speaker on my left side has much more to say about the technology behind that. What I can say so far from research is: we need both. We need to deploy artificial intelligence to monitor the content, to find and detect the content, for the benefit of children. But still, I'm pretty much convinced that we cannot give that responsibility to the technology alone. We also need

Babu Ram Aryal:
human intervention. Thanks. Initially, in my sequence, Mr. Gopal was next to you, but as you just referred to him, I'll go to Sarim first and then come back to Gopal. So, Sarim, you now have two very significant opinions on the plate to respond to, and again the same question that I'd like to ask. Meta platforms are significant not only for kids but for all of us, and kids are also coming to various platforms; not only Meta platforms, we are discussing platforms neutrally. So what are your thoughts on this? What are the major issues on the platforms, including Meta platforms, and what about the opportunities? Of course, as you rightly mentioned, first come rights, and only if there is any violation does protection come in. So, Sarim, can you share some of your thoughts? Thank you,

Sarim Aziz:
Babu, and honored to be here to talk about this very serious issue, and humbled, obviously, by the speakers here. As they've said, I just want to reiterate that this is a global challenge that requires a global response and a multi-stakeholder approach. Law enforcement alone can't solve this; the tech industry alone cannot solve it. So this is one where we require civil society, we need families, we need parents, and that's how we at Meta have approached this issue. We work on all those fronts. In terms of industry, I think this is also a good example where the child rights and child safety work can actually be an example for many other areas, like terrorism and others, because we are part of a tech coalition, which was formed in 2014; Microsoft and Google are also part of that. That's been an excellent forum for us to collaborate, share best practices, and come together to address this challenge. And in 2020, as part of Project Protect, we committed to expanding its scope to protecting kids and thinking about child safety: not just preventing the most harmful type of material, which is CSAM and other things, but also keeping kids safe. So if I were to summarize Meta's approach, we look at the issue in three buckets, and AI has a role to play in all three areas. The first is prevention. When you think about prevention, we have something called, for example, search deterrence. When someone is going out there on platforms trying to look for such content (I think Michael, at one point, talked about pre-crime), we actually use AI; typeahead search is based on AI as well, in terms of what people are typing. We prevent such searches from coming up within Facebook and Instagram and other search mechanisms, to prevent such content from surfacing. And if people are intentionally trying to type this stuff, we actually give them warnings to say that this is harmful and illegal content they are trying to look up, and divert them towards support mechanisms. So that's pretty important for prevention. Also, if you think about the bad behavior: sometimes kids are vulnerable, and they might get friend requests from people who are adults, or whom they're not even connected to, strangers. So now we actually have in-app advice and warnings popping up to tell them they shouldn't accept friend requests from strangers: this person is not even connected to your network. Those are things that AI can help detect and surface, like in-app advice, safety warnings, notices, and also preventing unwanted interactions. We actually do intervene and disrupt those types of suspicious behaviors when we detect them using AI. So prevention is the one bucket where we are optimistic and excited about what AI can do to prevent harm from occurring. The second bucket is the large part of the discussion that we've seen already, around detection. Detecting CSAM has been a large focus for the industry for over a decade, using technology like PhotoDNA, which was initially built by Microsoft. We've built on top of that, and we now have photo- and video-matching technology that Meta has open-sourced, I believe, just recently. That's called PDQ, as well as TMK, which is for video matching. So that's been open-sourced on GitHub. So now, yeah.
A bit of clarification about PDQ and TMK? You know, the audience may not know.
Yeah, I mean, those acronyms are easy to Google: PDQ. These are basically built on top of PhotoDNA, but open-sourced so that any platform, any company, can use them. Meta truly believes in open innovation: bad actors will use technology in multiple ways, and I think our best defense is to open-source this technology and make it accessible to more safety experts and more companies out there. You don't have to be as large as Meta to be able to implement child safety measures. If you're an emerging platform in Zambia or in any other country, you can take this technology and ensure that you both prevent the spread of this type of CSAM content and enable detection and the sharing of hashes and digital signatures to detect CSAM. So it helps for both photos and videos: it's called PDQ for photos and TMK+PDQF for videos, and it's been open-sourced on GitHub for any developers and other companies to take. This also helps with transparency, and Ampreet talks about ethics: it shows the tech that we use, so we can be externally audited on the kind of technology we use to detect this. This is also technology we use internally for training our algorithms and detection, you know, machine learning technology, to ensure that we are able to detect these kinds of content. And lastly, the most important area where AI is also helping is response. That's where law enforcement comes in, along with other civil society and safety organizations like the National Center for Missing & Exploited Children (NCMEC); they're a very important partner for Meta and other companies. Anytime we do detect CSAM content, we actually even help them build a case system using the same technology that I mentioned. If it is youth, for example, who are dealing with non-consensual imagery issues, you know, images that they've put up themselves, there's a project called Take It Down that's been launched by NCMEC, which helps; and that's cross-platform: Meta's on there, TikTok's part of it, other companies are part of it, and those images can be prevented from spreading. So those are important initiatives. And that response, closing that loop with NCMEC, which works with law enforcement around the world and whose CyberTipline helps law enforcement in their responses, is really critical. So I'll just pause there. But those are the three areas where we see technology, as well as AI, playing a very important role in preventing, detecting, and responding to these child safety issues. Thank you.

Babu Ram Aryal:
Thank you, Sarim. One very interesting issue: governments in the developing world complain about platform operators, saying that platform operators are not cooperating on investigation issues when, in a developing country, they don't have much technology to catch the bad people. Michael, I'll come back to Gopal again; you just sparked that question, and that's why I'm going to Michael. Michael, what is your experience while dealing with these kinds of issues, and especially, what are the responses from platform providers on child abuse cases online?

Michael Ilishebo:
So basically, that depends on which platform the content is on. Facebook has been a little bit responsive; they are responding. Instagram, they're responding. TikTok, being a new platform, we're still trying to find ways and means of engaging their law enforcement liaison department. Also, we've seen an increase in local providers. For those, it's much easier to bring down the content, and much easier for them to follow the guidelines of putting an age limit on whatever they are posting. If it's a short video that contains a bit of violence, some nudity, or any other feature we deem inappropriate for a child, they are required to do the correct thing and restrict access by age, as on Facebook. Because if I joined Facebook and entered my age as 13, that content would not appear on my timeline or in my feed because of my age. But as I said earlier, it's difficult to monitor a child who has opened their own Facebook account, because they'll just make themselves 21. You've seen on Facebook there are people who are 150 years old; you check their birthday and it says this person is 120 years old. The platforms themselves, like Facebook, do not actually help us in addressing the issues of age-gating. So basically, as a way of addressing most of these challenges, I'll restrict myself to Meta, because they can answer any issue I'm going to raise, because they are part of the panel. I can't discuss Google, and I can't discuss any other platform that is not here. So Meta has been responsive, though at times it is slow. But through their law enforcement portal, issues of child exploitation are given priority. Issues to do with, say, freedom of expression may be a little bit slower. But on Meta's part, I would still give them 100%, because within the shortest period of time, when you request either a takedown of data or the information behind an account, they will provide it to you within the shortest period of time. So my experience with Meta so far has been OK. Thank you.

Sarim Aziz:
Thank you for that; that was not pre-scripted. I had no idea what Michael was going to say, but thank you for that feedback. I did want to comment on the age verification issue. That's something that's obviously in discussion with experts around the world, in different countries; lots of discussions are going on. But we at Meta are testing some age verification tools, which we started testing in June in some countries. Based on initial results, we see that of the teens who tried to change their birth date, we were able to actually stop about 96% from doing that. So again, I don't think any tech solution is going to be perfect, but attempts are being made to figure out what works. This is on Instagram, by the way, this age verification tool that we have. And we hope, based on those results, to expand it further, to prevent minors from seeing content that they shouldn't be seeing, even if they've tried to change their age and things like that. Just wanted to comment on that.

Babu Ram Aryal:
Thanks, Sarim. Now, finally, I'll come to Mr. Gopal. So we have discussed various issues from a technical perspective, and some from a direct enforcement perspective as well. Jutta has discussed certain issues, and she also referred to the Child Rights Convention. As a long-practicing lawyer, what do you see from your country's perspective, from the Nepalese context? What are the major legal protections for children, especially when we talk about online protection on online platforms? Yeah, please.

Ghimire Gopal Krishna:
Thank you, Babu. Thank you very much for giving me this opportunity to say something, first, about my country. Of course, I am representing the Nepal Bar Association; that means an institution of human rights protectors. Basically, we have four subjects we focus on: first, human rights, we deal with human rights; secondly, democracy; thirdly, the rule of law; and the fourth issue is the independence of the judiciary. Of course, being human rights protectors, we have to focus on the child rights issue too. This is our duty, and we are focusing on the child rights issue. You know, in our present constitution, Article 39 explicitly sets out the rights of the child. Every child shall have the right to a name and birth registration, along with recognition of his or her identity. Every child shall have the right to education, health, maintenance, proper care, sports, entertainment, and overall personality development from the family and the state. And every child shall have the right to elementary child development and child participation. No child shall be engaged in any factory, mine, or similar other hazardous work. This is an important right for a child in Nepal's constitution. No child shall be subjected to child marriage, transported illegally, abducted, or taken hostage. No child shall be recruited or used in the army, police, or any armed group, or be subjected, in the name of cultural or religious traditions, to abuse, exclusion, or physical, mental, sexual, or any other form of exploitation or improper use by any means or in any manner. And no child shall be subjected to physical, mental, or any other form of torture at home, at school, or in any other place or condition whatsoever. So these are the constitutional rights.
You mean a very clear protection of children, including against online abuse, is reflected in the constitution?
Yes. In our constitution, we have clear provisions for the protection of child rights. And we have a Child Protection Act also. The Child Protection Act criminalizes child abuse activities, online and offline both. And we have pedophile cases, and the courts in Nepal very strictly prohibit such activities; that is clearly our courts' practice. And we have online child safety guidelines as well; the online child safety guidelines explicitly provide recommendations for action to the different stakeholders. And though we have not gone into AI, and have not even thought about it, I would like to say that our constitution, and whatever legal provisions we have, whatever constitutional provisions we have, keep child rights in focus. Our constitution and our legal framework are especially very close to the child protection issues, I would like to say. And I can say that child rights are our focus, and child rights are a core issue for our legal provisions, constitutional and legal provisions too.

Babu Ram Aryal:
Thank you. I’ll go to the next round of discussion of this session. Basically, when we propose this workshop, our workshop is harnessing AI to child protection, right? So I’ll come to Sarim first. How technology are leveraging production of child online, especially when AI tools are available? What are the tools? What are the models? And how this can leverage on protecting child online?

Sarim Aziz:
Thanks, Babu. So yeah, I'll go a bit deeper into the overview I mentioned. As I said, AI has been a critical component of online child safety, prevention, detection, and response for a very long time. Even though the gen-AI discussion has perhaps hyped the interest in AI for child safety, it has long been a critical component of that response. The most obvious area, as I mentioned, is CSAM, child sexual abuse material. It started with Microsoft ten years ago with the PhotoDNA technology, which has evolved, and we've open-sourced our own since then. That work on detection is the most crucial, because it also helps with prevention, detecting things at scale; especially as we have a platform of 3.8 billion users, we want to prevent people from even seeing such content, and from even uploading it. And that still requires a lot of human expertise; that's important. It's not as if humans are not involved: making sure you've got large, high-quality data sets of CSAM material to train the AI to detect this requires a lot of human intervention, and we still need human reviewers for things that AI cannot detect. There's definitely a challenge with gen AI on the production side, where people might be producing more of this material more easily, but on our side we've got the defenses ready to build on and improve, to make sure we're able to leverage AI to detect those kinds of things as well. There's a lot more work to do in that space, but the industry has done well in terms of leveraging AI on the detection side. The prevention side, to us, is more exciting, because that's something new that we've focused on: user education, youth education, preventing interactions that are suspicious, you know, with strangers and adults, that they shouldn't be having. The issue of parental supervision is an interesting one. We obviously have parental controls built into our products, into Facebook and Instagram. We believe that parents and guardians know best for their kids; but at the same time, there are obviously privacy issues that we also have to consider, so those are some of the ethical discussions that are ongoing. So I think prevention and detection are excellent, and on the response side, child safety is one of the few areas where partnerships like NCMEC and multi-stakeholder responses are so critical, ensuring that we're able to work with safety partners all around the world and law enforcement around the world. We have a safety advisory group as well, of 400 experts from around the world, who advise us on these products and our responses.
A very quick follow-up question, sorry. You just mentioned that we have safety partners; how does that work, especially in protecting children? There are various community standards, and, you know, even though the CRC has a very specific age threshold for minority and majority, some countries differ. Even in my country there have been debates in the recent past: though the CRC says 18 years, and our local child legislation also says 18 years, there was discussion even in Parliament that we should reduce the age threshold between minority and majority. So, dealing with different legislative regimes, how do platform operators work on combating these kinds of issues?
Yeah, those discussions are ongoing as we speak in many countries, in terms of what the right age is, at what age you require parental consent; everyone will have a different opinion on that. What we're really trying to focus on is this: of course, on Meta's platforms, for most products you need to be at least 13 to create an account; in some jurisdictions it's different, and we obviously respect the law wherever we operate. However, our focus is really, regardless of whether you're 13 or 14, on the nature of the interaction, and on making sure that you are safe. Similarly, if there's violent content, we have something we call "marked as disturbing", even for adults, actually. So it's about making sure that minors don't even see content like that; but also, even if adults see it, AI actually helps us recognize that this might make even someone who's 18 uncomfortable, and we have technologies for that as well. So age is obviously a number, but at the same time we need to make sure the protections are in place, the systems are in place, to protect youth in general, whether they're 13 or 14 or 17.

Michael Ilishebo:
thanks Michael you’re like you want to respond on this year I’m just adding on to what he has said among the issues that is a little bit challenging is a classification of inappropriate content I’ll give an example mmm under meta platform 13 years is the minimum age one can join Facebook but based on our laws and standards in the countries that we come from 13 years is deemed to be a child who can’t even probably own a cell phone the second part is the content themself an image of violence probably in a cartoon form or music with some vulgar violence content or anything that may be deemed inappropriate for Zambia might actually be deemed appropriate say for the u.s. a child holding a gun in Zambia or in Africa either through the guidance of the parent or without it’s literally something that is unheard of but in the u.s. we’ve heard of children going with guns in schools doing all sorts of things we’ve seen images where if you look at it as a parent you’d be worried but that image is there on Facebook and he’s been accessed by another child in another jurisdiction where it is not deemed to be offensive so issues of clarification themselves they’ve played a challenging role up just to add what is he said thanks Michael Utah yes thank

Jutta Croll:
you for giving me the floor again. I would like to go a bit more into where AI can be used, and probably also where it can't be used. Some of you may know that there is already a new draft regulation in the European parliamentary process on child sexual abuse, which differentiates between three things to be addressed. One is already-known child sexual abuse imagery, which Sarim has described very well. It is possible to detect that with a very low false-positive rate thanks to PhotoDNA, and the improvements that Meta and other companies have made during the last years have also made it possible to detect video content, which was quite difficult some years ago; it has become much better. The second part is not-yet-known sexual abuse imagery: the new products, and they are coming in huge amounts; huge numbers of images and videos are uploaded every day. Of course, it is much more difficult to detect imagery that has not been classified as child sexual abuse imagery, and the false-positive rate is much higher in this regard. And then the third part, which is the most difficult, is the detection of grooming processes, where children are groomed into contact with a stranger in order to be abused, either online, or to produce self-generated sexualized content of themselves and send it to the grooming person. So we know that these different areas respond to different artificial intelligence strategies in different ways. The most difficult part is grooming, where, obviously, if you don't have the means to look into the content, because the content of the communication might be encrypted, you would need to use other strategies to detect patterns, for example of the type of communication: one sender addressing a thousand different profiles to get in contact, in the expectation that at least maybe 1% of these addresses will react to the grooming process and get in contact with the person. And that, I think, is where, talking about shared responsibility, this could not be done by the regulator, it could not be done by the policymaker, but it could be done by the platform providers, because you have the knowledge, and I do think you have the resources, to look deeper into these new developments and try to find a technological way, based on AI, to address these issues. And I push the ball back to you, because I'm pretty sure you can answer that.

Sarim Aziz:
Thank you. That is exactly our area of focus in recent times: preventing grooming, and that is where AI is playing a very key role, as I mentioned, in preventing unwanted interaction between an adult and a teen. We have changed the default settings so that a youth account cannot be messaged by a stranger. The same applies to comments on public content: a comment made by a youth will not be visible to an adult they are not connected to. So we are trying to reduce that sort of unwanted interaction. It is still early days, but we have already taken measures; we have not waited, because we know this is the right thing to do in terms of ensuring that adults are not able to discover teen content. On Instagram, for example, you will not see any youth content in the discovery reel. And whenever we detect an attempt at contact, such as a friend request from someone who is not in a teen's network, we give the teen a warning. That is an opportunity to educate them: this person is a stranger, you should not accept the friend request. So I think you are right that this is the right focus for us: to keep using technology and AI to prevent grooming and to protect teens from unwanted interactions with adults.

Babu Ram Aryal:
A very significant issue. Jutta just referred to the detection of the grooming process of a child on a platform, and I have myself dealt with certain cases in Nepal as well. Michael also raised the classification of uses of platforms, and there are various categories of users who get connected with a platform. My question to the platform providers may sound like a law enforcement issue, but it is about accountability: if it is seen that a platform was used for the prolonged grooming of a child, leading to significant abuse of that child, do you see the platform, given the shared accountability you rightly mentioned, also sharing accountability for that serious incident, rather than treating it only as a matter for law enforcement? Or is my question not clear?

Sarim Aziz:
I think platforms definitely have a responsibility to keep their users safe, and, as Michael alluded to, this is a global issue that requires a global response. We have to do our part, and we do that by building the product with safety by design. Some of the changes we are making are literally safety by design: when we develop features, we ask how a young person would use them and how we could keep them safe. For example, we do not suggest adults as friends to young people. That is safety by design in the product. But beyond that, when something bad happens, we absolutely work very closely with law enforcement from around the world, including through NCMEC. When we see a situation where a child is in danger, and many times you will not read about it in the paper, platforms do cooperate and reach out to law enforcement with the information they see, to ensure that the child, or anyone else, can be kept safe. At least that is my view. I obviously cannot speak on behalf of every platform, but that is how we operate at Meta.
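
To make these "safety by design" defaults concrete, here is a minimal sketch of how such interaction rules could be expressed in code. It is an illustration only, not Meta's actual implementation; the names (`User`, `can_message`, the under-18 cutoff) are assumptions for the example.

```python
from dataclasses import dataclass, field

ADULT_AGE = 18  # assumed cutoff; platforms and jurisdictions vary

@dataclass
class User:
    user_id: str
    age: int
    connections: set = field(default_factory=set)  # ids of accepted contacts

def is_minor(user: User) -> bool:
    return user.age < ADULT_AGE

def can_message(sender: User, recipient: User) -> bool:
    """Default-deny: an adult stranger cannot message a youth account."""
    if is_minor(recipient) and not is_minor(sender):
        return recipient.user_id in sender.connections
    return True

def comment_visible_to(author: User, viewer: User) -> bool:
    """A comment written by a youth is hidden from unconnected adults."""
    if is_minor(author) and not is_minor(viewer):
        return viewer.user_id in author.connections
    return True

def show_in_discovery(author: User, viewer: User) -> bool:
    """Youth-authored content never surfaces in an adult's discovery feed."""
    return not (is_minor(author) and not is_minor(viewer))

# An unconnected adult fails all three checks against a teen account.
teen = User("t1", 15)
stranger = User("a1", 35)
assert not can_message(stranger, teen)
assert not comment_visible_to(teen, stranger)
assert not show_in_discovery(teen, stranger)
```

Under rules of this shape, teen-to-teen and already-connected interactions pass through, while the adult-stranger paths the panel describes are closed by default.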

Babu Ram Aryal:
I have two questions for the panelists, on privacy and on future strategy, but before coming to those I will take a few questions from the audience. I open the floor: if you have any questions from the floor, you are welcome. Yes, please, and please introduce yourself first.

Audience:
Thank you so much for the conversation. Oh yes, sorry: my name is Sumana Shrestha, and I am a parliamentarian from Nepal. When it comes to protecting children, one of the other things we also need to protect them from is bullying. You operate across so many different languages, so what is Meta, for example, doing about content moderation in the different countries in which it is used? It would be great to know. Thank you.

Sarim Aziz:
Thank you for the question and for joining this discussion. We have very clear policies against bullying and harassment across all our surfaces; the same policy applies on Facebook, Instagram and our other services, so the same protections apply to everyone, all youth and all adults. Of course, our threshold is much lower when it comes to kids and youth: if a minor is involved in a bullying situation, our policies are much harsher in the enforcement action we take and in the strikes against the individuals engaged in that behavior. We apply a variety of enforcement actions, not just stopping the behavior but also restricting further abuse from those accounts.

That said, bullying is a difficult area. We have made progress, but compared to CSAM or terrorism, AI has not been as successful here; we do not have a 99% proactive action rate, because bullying can take so many different forms. It may not be obvious to a stranger that bullying is going on, because context is so important: the relationship between the two individuals involved, the cultural context. So the policies are clear, and we do enforce them and remove and prevent such content, but we rely largely on our human reviewers. We have experts from around the world, including from Nepal, who review content in the local language and help us enforce against it. We also rely on the community to report, because if no one reports it, platforms will not know that something is bullying. That is why context and intervention matter, including from safety partners and civil society. We have partnerships with local safety organizations in many countries, including Nepal, where victims of bullying can report such content to local partners who ensure that Meta's services take action quickly.

Babu Ram Aryal:
More questions? From the audience? Online questions? Okay, we have one online question.

Audience:
There's a question from Stacey: What are the current accepted norms for balancing human rights with privacy and security? Are we good at it?

Babu Ram Aryal:
Is it addressed to anyone specific? No, they did not mention. So, Sarim and Jutta?

Jutta Croll:
Okay, I am going to take that one first. I already wanted to refer to the privacy of children, because I think the relevant article is the most ambivalent one in the UN Convention on the Rights of the Child: children have a right to privacy of their own. That also means, and this is made very clear in General Comment No. 25, that in the digital environment it has become more difficult for parents to strike the balance between respecting their children's privacy on the one hand, which would mean not looking into their mobile phone, as Michael was saying before, and on the other hand fulfilling their task and duty to protect their children. So even in the child's social environment, in the family, it is very difficult to balance their right to privacy against their right to be protected. The same difficulty appears in regulation, for example the EU regulation I have been quoting that is underway: the moment we ask for the monitoring of content, we know that is always an infringement of the privacy of the people who produced that content or who are communicating. Looking into people's private communication would infringe their right to privacy, and that would also mean infringing the rights of children and young people, because they have that right to privacy as well. And on the other hand, if we do not do that, how could a platform like Meta, or any other platform, fulfil its responsibility and accountability for protecting its users? I do think it is an equation that does not come to a fair solution; we need to tackle it from different directions to try to find a balance.

Sarim Aziz:
Yes, just to add to that, I think this is a really important one. When you asked the question, it reminded me of the Google case where a parent took a nude photo of their child to send to a doctor during COVID, and Google's AI marked it as harmful content and reported the situation to law enforcement. So there is definitely that balance, the rights of the child versus the rights of parents, and that is an interesting one. But I do want to say that the industry's view is also quite firmly against scanning private messages, because the numbers seem to indicate that we do not actually need to do that. Everything I mentioned in terms of prevention and detection is based on behavioral patterns, not on content; CSAM detection aside, of course, which does require content matching. If we focus our energy on public surfaces and on the kinds of behavior we are trying to prevent, grooming behavior, there is plenty of opportunity for technology, civil society and experts to work there, and then you do not need to break into private messaging. In fact, a good statistic: in Q1 of this year, and I am only quoting Meta's numbers, the global figures across platforms are even higher, we sent 1.2 million reports of child-related CSAM material to NCMEC without invading anybody's privacy. That is a staggering number, and that is just Meta. I do not think we need to go there; it brings a lot of unwanted side effects. If we focus our energy on behavioral patterns and public surfaces, there is enough opportunity to prevent grooming behavior and keep kids safe.
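
Sarim's distinction between behavioral signals and content scanning can be illustrated with a toy heuristic: score adult accounts on metadata such as how many of their friend requests go to minors with whom they share no connections. The field names and threshold below are invented for this sketch; real systems would use far richer, learned signals.

```python
SUSPICION_THRESHOLD = 0.7  # invented cutoff for this sketch

def grooming_risk_score(friend_requests: list[dict]) -> float:
    """Score an adult account from interaction metadata only, never message text.

    Each request is a dict such as {"target_age": 14, "shared_connections": 0};
    the field names are hypothetical stand-ins for real behavioral signals.
    """
    if not friend_requests:
        return 0.0
    to_minors = [r for r in friend_requests if r["target_age"] < 18]
    if not to_minors:
        return 0.0
    # Signal 1: what share of all requests target minors?
    share_to_minors = len(to_minors) / len(friend_requests)
    # Signal 2: of those, how many go to minors with no mutual connections?
    strangers = sum(1 for r in to_minors if r["shared_connections"] == 0)
    stranger_share = strangers / len(to_minors)
    return 0.5 * share_to_minors + 0.5 * stranger_share

def should_flag_for_review(friend_requests: list[dict]) -> bool:
    """High-scoring accounts are routed to human reviewers, not auto-punished."""
    return grooming_risk_score(friend_requests) >= SUSPICION_THRESHOLD
```

The point of the sketch is that nothing in it reads a message body: every input is interaction metadata, which is what allows this kind of detection without scanning private communication.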

Babu Ram Aryal:
In previous conversations Michael mentioned privacy, and before opening the floor I said I had a separate question on privacy, so let us discuss it further. In the USA there was a big debate around COPA and CIPA, the Child Online Protection Act and the Children's Internet Protection Act. Those debates went up to the Supreme Court, with child protection clearly on one side and the freedom of adults on the other. So how can we reach a better position, especially from a developing-country perspective like Nepal's or Zambia's? What kind of legislative framework would be most effective? Many countries have no specific legislation on online child protection; there may be provisions in a general Child Protection Act, but no clear position on online child protection issues. Michael, I will come to you first: what is your experience of the Zambian legal regime? How is the Zambian legislative framework addressing these kinds of issues?

Michael Ilishebo:
So basically, as I said earlier, in 2021 we split up our Electronic Communications and Transactions Act, which had contained cybercrime, cybersecurity, electronic communications and other ICT legislative issues. We came up with two further pieces of legislation separated from the ECT Act: the Cyber Security and Cyber Crimes Act and the Data Protection Act. The Data Protection Act covers matters of privacy. But of course privacy is a generic term. At the end of the day, what privacy does a ten-year-old need when they are under the control of a guardian or parent? They may not know what is good and what is bad, because of their age and state of mind. And coming back to security and safety, they become vulnerable the moment privacy comes in. If you ask a child, "Let me have your phone, let me see whom you've been communicating with," the child may say, "I have the right to privacy." What do you do? It is true: the moment you allow a child to own a phone, you have allowed them a degree of privacy. But it also depends on which platform they are using.

I will give the example of my own kids. Back home, for YouTube and any other Google product, they use a family account. That allows me to regulate which apps they install: even if I am not there, I receive an email saying this child wants to install this application, and it is up to me to allow or block it. The same applies to YouTube. I have taken that step because I will not always be there to see what they are doing, but technology, through AI, helps me filter and brings to my notice things it judges to be above their age. There are games online that appear innocent in the eyes of an adult, but as a child keeps playing, a lot of bad things, images of a sexual nature, are introduced in the middle of the game; when you look at it as an adult, you will not see anything. Providers like Google have a way of knowing which applications, on the Play Store or any other platform, are appropriate for a child. So as a step to protect my kids, I have allowed them to use only a family-friendly account where, despite my being in Japan, I can see if they viewed a video I deem inappropriate, block it from here, or talk to them and say, never visit that page again. Microsoft may likewise have its own policies, through its browser, for blocking certain sites and pages.

But again, it comes back to the issue of human rights and privacy. To what extent are we able to supervise our kids? Is it because they share a single device in the house, one using it in the morning, another in the evening, or because each has their own device on which we have installed a family account that lets the parent monitor it? And of course that is not always enough, because a child is an adventurous person. They always find ways and means to bypass every control. They seem to know more than we do.
The same also applies to crimes where a child is the victim. A child may be groomed by somebody they are chatting with and told to do this and that, and they will bypass all the controls you have put in place. As much as you have put privacy protections and safety rules around how they navigate their online space, there is a third party out there taking control of them and making them do and visit things they are not supposed to.
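
The family-account arrangement Michael describes, where an install attempt on a child's device is held until a parent approves it remotely, follows a simple request-and-approve pattern. The sketch below is a generic illustration of that flow under assumed names (`request_install`, `parent_decision`); it is not Google Family Link's actual API.

```python
import uuid

# In-memory stand-ins for a notification service and a pending-request store.
PENDING: dict[str, dict] = {}

def request_install(child_id: str, app_name: str, notify_parent) -> str:
    """The child's install attempt is held and the parent is notified."""
    request_id = str(uuid.uuid4())
    PENDING[request_id] = {"child": child_id, "app": app_name, "status": "pending"}
    notify_parent(f"{child_id} wants to install {app_name} (request {request_id})")
    return request_id

def parent_decision(request_id: str, approve: bool) -> str:
    """The parent approves or blocks from anywhere, e.g. from Japan, as Michael notes."""
    req = PENDING[request_id]
    req["status"] = "approved" if approve else "blocked"
    return req["status"]

# Example: the install stays pending until the parent acts on the notification.
rid = request_install("child-1", "SomeGame", notify_parent=print)
assert PENDING[rid]["status"] == "pending"
parent_decision(rid, approve=False)  # parent blocks the install remotely
```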

Sarim Aziz:
A good question. I think this comes back to the prevention aspect. The last example Michael mentioned is exactly why we have changed the default settings for youth accounts: to prevent that kind of interaction. Prevention is a good strategy, together with making sure safety is there by design, and this is where AI is helping. On the ongoing debate, as Michael said, kids are digital natives; they are good at circumventing all this. But if safety is designed into the products and services they use, and we also have parental supervision tools on Meta's platforms, parents can be aware of who their children are communicating with and what type of content they are interacting with. By default, kids do not see any advertising on Facebook, which is obviously important. Likewise, content that is violent, graphically violent or otherwise inappropriate is not visible to them; as I said, even for adults such content is disturbing, so we mark it as disturbing and adults do not have to see it by default. It is an ongoing discussion, but I think the solution is safety by design, and youth safety by design in particular, because kids are sometimes the early adopters of whatever comes along. And if we keep them safe, we actually keep everyone safe, not just kids.

Jutta Croll:
Yes, I have to respond to one thing that Sarim said: when you say kids do not see advertising on Meta, that is only when they have been honest about their age. If they have lied about their age, they may well see advertising. We have already been talking about age verification or age assurance, and I would say it is key to solving the issue. As long as we do not know the age, I would say, of all users, we fall short: it is not only that we need to know whether a child is a child, we also need to know whether an adult is an adult, to recognise when an inappropriate communication is going on. I am pretty sure that in the near future we will have privacy-preserving methodologies to establish the age of users and protect them better. But coming back to the question you also posed to Michael, I could answer in one sentence: talk to each other. Parents and children have to talk to each other. It is always better to negotiate what is appropriate for the child than to regulate. And I do think the same applies to policy, to platforms and to the regulator: talk to each other and try to find a constructive solution between you.

Babu Ram Aryal:
Jutta, I do not know whether it is proper to ask you this or not. Earlier you mentioned the upcoming legislation in the European Union Parliament. Can you share some of the domestic practices of European member states on online child protection? I wanted to ask that question before, but the sequence developed differently, sorry for that. If you can, share any member-state perspective on online child protection.

Jutta Croll:
Do we have two more hours to talk about all the strategies we have in Europe? Of course it differs between countries. What we see, and I think this has held for several years, is that countries that are starting to legislate now, or started two, three or five years ago, focus much more on the digital environment and on legislating against child sexual abuse in a way appropriate to it, while countries with longer-standing child protection laws that did not address the digital environment need to amend those laws, and that takes time. So the newer the legislation, the better it is fit for purpose to address child sexual abuse in the online environment. In Germany, in 2021, we got an amended Youth Protection Act that refers much more to the digital environment than before, and it takes the approach I was just talking about: it is called dialogic regulation. Rather than imposing obligations on the platform providers, it asks for a dialogue with them to try to find the best solution. I think that is much more future-proof than prescriptive regulation, because you can only ever regulate the situation you are facing at the moment you legislate, whereas we need to look forward. And again, I am addressing the platform providers: you are in the position to know what is in the pipeline, which functionalities will be added to your service. So if you do safety by design, and, as Sonja Lewitt put it in another session, it should be a child-rights-based design, then the regulator would probably not have so much work to do.

Babu Ram Aryal:
Gopal, do you want to say something on this?

Ghimire Gopal Krishna:
A moment ago the question was raised: what is the right age of adulthood? That is now debatable in the Nepalese context. The age of marriage is under debate; our parliamentarian raised it earlier, though I think she has left. What is the correct age for marriage? We have a provision that when a child completes 16 years of age, he or she is provided a citizenship certificate, which is, in effect, a certificate of adulthood. A person can vote for their representative in Nepal on completing 18 years of age. And the age of marriage is 20 years. In my years of practice, many cases have come before the court, rape cases among them, where the question is: what is the age of consent? Many people are in jail today because they consensually engaged in a relationship before the age of 18, and that is questionable. That is why this is so important, and the matter is now being debated in civil society too. Although we have settled principles and examples, what is the proper age for marriage, and could a similar age be settled internationally? This is a very important question for us, which is why I raise it with our fellow panelists.

Babu Ram Aryal:
To link to this issue, I will go to Sarim very briefly. Sarim, when litigation arises, law enforcement agencies see different ages, age groups, actors and content, especially content involving sexual relationships. As Gopal was noting, some legislation permits such relationships, and the content could be used as evidence; the debate differs across societies. How easy is it for platform providers to deal with these awkward situations, and how do platform providers respond to such issues?

Sarim Aziz:
Yes, these child safety issues are definitely top of mind for our trust and safety teams at Meta, and I am sure for other platforms too. The NCMEC number I shared earlier is a good proof point of how we cooperate with civil society and law enforcement. There are cases where we do not wait for NCMEC: if a child is in imminent danger, our child safety teams, who look at these cases, reach out directly to law enforcement in the country, and there have been cases where we busted child abuse rings that way. It is an ongoing effort, and I would not say it is easy; AI has helped, but it still requires human investigation. The age verification piece is interesting. As I mentioned, we are running tests there, and AI does help: one of the solutions being tested has the youth send a video of themselves for verification. You can rely on IDs to a certain extent, but that raises other questions of data collection: how much private citizen data are you going to collect? There are also suggestions to link into government systems, but those raise surveillance concerns. So I do not think there is a silver bullet, and no solution is going to be perfect. We are testing age verification on Instagram, as I said, and we will wait and see what the results say. There will be some level of verification, but again, nothing will be perfect. Still, as I said, we need to figure out whether an adult is an adult and whether a child is a child, and we also need to detect other behaviors, suspicious behaviors, fake accounts and the like, and that is where AI is definitely helping us quite a bit.
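
One hedged way to picture the age-verification trade-off Sarim describes is a buffer-zone rule: when an age estimate from a video check is clearly above or below the threshold, act on it, and only in borderline cases fall back to stronger, more data-hungry verification such as an ID document. The function name, inputs and cutoffs below are assumptions for illustration, not the system being tested on Instagram.

```python
def age_gate(estimated_age: float, margin: float) -> str:
    """Buffer-zone decision around an assumed adulthood threshold of 18.

    `estimated_age` and `margin` would come from a (hypothetical) face-based
    age-estimation model; the names and logic are illustrative only.
    """
    if estimated_age - margin >= 18:
        return "treat_as_adult"
    if estimated_age + margin < 18:
        return "treat_as_minor"
    return "request_stronger_verification"  # e.g. an ID document, at a privacy cost
```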

Babu Ram Aryal:
Thanks. A very brief follow-up. You frequently mention that you report to NCMEC, but NCMEC is a nonprofit organization based in the US, right? So what would your response be if other jurisdictions wanted to collaborate with you?

Sarim Aziz:
NCMEC collaborates with law enforcement around the world, so if you are a law enforcement agency in any country, you can reach out to them. There is also a partner organization called ICMEC, which is international and works with law enforcement to set up access to the CyberTipline. That gives local law enforcement access to that information.

Babu Ram Aryal:
Jutta.

Jutta Croll:
Yes, I just wanted to add something, because I am very grateful that you brought in the question of consent. The principle of the age of consent has come under considerable pressure in the digital environment, where the question is what was consensual and what was not. The General Comment I referred to before says, in paragraph 118, that self-generated sexualized imagery shared consensually between young people should not be criminalized. I think that is a very good provision, because young people also need to explore their sexual orientation and learn about each other. But when it comes to these images, AI is not able to tell whether the sharing was consensual or not, which makes it very difficult to apply a rule that turns on consent. As I said before, we can rely on artificial intelligence in many respects, and I am pretty sure it will get better and help us protect children better, but there are certain issues where artificial intelligence cannot help and we need human intervention. Thank you.

Babu Ram Aryal:
Thank you. Any questions from the audience? We have very little time left in the session. If you have any question, please.

Audience:
For the protection of the rights of the child, the companies and social media groups should take responsibility for registration as well as content, and should use AI to identify local languages so that words cannot be misused. Let me give my own example. About eight years ago, I got a friend request under the name of a beautiful child; the picture looked about 13 or 14 years old. I did not accept, but she called me frequently in the evening and I ignored it. Then I thought I should take note of those calls and tell her parents what their child was doing, so I asked whether she had a private phone number, saying I did not like to talk on social media. It turned out she was not a child at all: she was a woman who wanted an informal relationship with me or someone else. When I asked her why she had used a child's picture and a different name, she said that with a child's picture, people would accept the request. This is a case I faced myself, and similar things may happen to others. So registration systems on social media should have some authentication mechanism; without one, similar cases will happen to others as well. My request to the social media agencies is to be accountable, responsible and intelligent enough that their platforms are not misused. That is my suggestion. Thank you very much.

Babu Ram Aryal:
Thank you. Asirbad.

Audience:
Thank you, Babu. I will move a little to the human side while we are talking about AI, and very briefly, since time is short. Young people experience a lot of peer pressure, and in the digital era social media has amplified it: rising levels of anxiety, depression, body image concerns, eating disorders and suicidal thoughts. When we look at the root causes of peer pressure, the need to fit in, the fear of rejection, the search for a sense of belonging, those are human needs, and because of them young people are very vulnerable to online exploitation. So what are social media companies doing on the human side, alongside the technical one? Thank you.

Just a short one; my question is also for Sarim. I am Binod Basnet from Nepal. Recently a message has been circulating on Facebook Messenger saying that you have infringed some of Facebook's child protection policies and that your account will be suspended if you do not follow certain instructions. The message carries a photo with the Meta logo, and it panics young users, who then give in their contact details, their ID, their password and so on. In reality these are phishing sites seeking your passwords. So my question is: what is Facebook doing about those phishing attackers? What retaliation or policy does Facebook apply against those phishing sites? Thank you.

Babu Ram Aryal:
Thanks. Our scheduled time is almost over, but the questions keep coming. So, Sarim, directly to you.

Sarim Aziz:
The questions are all directed at me; I am happy to continue the conversation after this discussion. But look, on all fronts, and this can stand as my concluding remark, I will go back to my introduction: platforms alone cannot solve all of this. We have technology that can assist, but technology still requires human expertise, from the platforms and also from civil society, law enforcement, government, parents and families. Phishing is a longstanding issue; the smartest people in this room have been phished, whether with Meta's logo or some other logo they recognize. When you are short of time and your attention span is short, you can be phished very easily. That is partly an issue of digital literacy and education, but from a systems perspective the way you fix it is authentication: with one-time authentication, even if someone is phished, their credentials alone will not give an attacker access to the system. That needs to change systemically, and it is a separate issue. On safety more broadly: Meta absolutely cannot solve these issues alone. On the human aspect, we work with the 400 safety advisors we have, and we are members of the WeProtect Global Alliance and other organizations through which we, along with the rest of industry, work to protect kids. I mentioned earlier how we use our platforms and AI to detect potential grooming or unwanted interactions, to educate kids and to prevent them from interacting with adults. Those are some of our efforts, but there is a lot more we can do, and we are open to ideas. As for the gentleman who was targeted through a fake child profile: we also rely on the community. One of our challenges is that people do not report; they assume platforms have the manpower or the ability simply to know when something is wrong. We will not know until people, civil society and users report things to us, and that is where we rely on our partners. Reporting is really key in these situations, to protect yourself and your community.
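
Sarim's point about one-time authentication can be made concrete with the standard TOTP scheme (RFC 6238), which many platforms use for login approval. The sketch below, using only the Python standard library, shows why a phished code has limited value: it is derived from the current 30-second window and expires with it. The demo secret is a commonly published example value, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant, the common default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Even if a phishing page captures both password and code, the code is tied to
# the current time window: replaying it after ~30 seconds fails verification.
if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # widely used demo secret, not a real key
```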

Babu Ram Aryal:
For closing, a brief response from Michael, one minute or less.

Michael Ilishebo:
So basically, we will not address any of the challenges we have discussed here without close collaboration between the private sector, governments, the public sector and the tech companies. Inasmuch as we cannot put all our trust in AI to help us police cyberspace, the human factor and close collaboration will be the key to addressing most of these challenges. Thank you.

Babu Ram Aryal:
Thank you, Jutta.

Jutta Croll:
Yes, thank you for giving me the opportunity for one last statement. I would like to refer to Article 3 of the UN Convention on the Rights of the Child, which states the principle of the best interests of the child. In any decision we may take, that policymakers may take, that platform providers may take, just consider whether it is in the best interests of the child: the individual child, in cases like the ones we have heard about, but also all children at large. Consider whether the decision to be taken, the development to be pursued, the technological invention to be made, is in the best interests of the child, and then I do think we will achieve more in child protection.

Babu Ram Aryal:
Thank you. Closing.

Ghimire Gopal Krishna:
We are part of the child rights framework; we have already signed the child rights treaty. Being a very responsible part of society, and as president of the Nepal Bar Association, I am committed to always favoring child rights protection acts and their positive amendments. I am very thankful to Babu Ram for giving me this opportunity.

Babu Ram Aryal:
Thank you very much. We are running out of time. I would like to thank my panelists, my team of organizers and all of you who participated actively; this session is, of course, dedicated to the benefit of children. I close this session. We hope to have a good report of this discussion, and we will share it with all of you through our channel. Thank you very much.

Speech statistics

| Speaker | Speech speed | Speech length | Speech time |
| --- | --- | --- | --- |
| Babu Ram Aryal | 114 words per minute | 1678 words | 880 secs |
| Sarim Aziz | 188 words per minute | 5241 words | 1670 secs |
| Audience | 165 words per minute | 776 words | 283 secs |
| Ghimire Gopal Krishna | 113 words per minute | 1016 words | 538 secs |
| Jutta Croll | 140 words per minute | 2520 words | 1081 secs |
| Michael Ilishebo | 156 words per minute | 2394 words | 921 secs |