Protecting children online with emerging technologies | IGF 2023 Open Forum #15
Event report
Speakers and Moderators
Speakers:
- Hui Zhao, China Federation of Internet Societies, Civil Society, Asia-Pacific Group
- Dora Giusti, UNICEF China, Intergovernmental Organization
- Eleonore Pauwels, UNICEF, Intergovernmental Organization (online)
- André F. Gygax, the University of Melbourne, Civil Society (online)
- Zhu Xiong, Tencent, Private sector, Asia-Pacific Group
- Tetsushi Kawasaki, Gifu University, Civil Society, Asia-Pacific Group
- Carolina Pineros, Red PaPaz, Civil Society, Latin American and Caribbean Group
- Nkoro Ebuka, Kumoh National Institute of Technology, Korea, Civil Society, Asia-Pacific Group
Moderators:
- Rui Li, UNICEF China
- Xiuyun Ding, China Federation of Internet Societies
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Moderator – Shenrui LI
During the discussion on protecting children online, the speakers placed great emphasis on safeguarding children in the digital space. Li Shenrui, a Child Protection Officer from the UNICEF China Country Office, highlighted the need for collective responsibility among various stakeholders, including governments, industries, and civil society, to effectively protect children from online harms. Shenrui stressed that it is not enough to rely solely on policies; education and awareness are also crucial in ensuring children’s safety online.
China is dedicated to leading the way in creating a safe digital environment for children globally. The Chinese government has introduced provisions to protect children’s personal information in cyberspace. Additionally, the country has organised forums on children’s online protection for consecutive years, demonstrating its commitment to addressing this issue.
Xianliang Ren further contributed to the discussion by highlighting the importance of adaptability in laws and regulations for addressing emerging technologies. Ren recommended regulating these technologies in accordance with the law and suggested that platforms should establish mechanisms such as ‘kid mode’ to protect children from inappropriate content. This highlights the need for clear roles and responsibilities in the digital space.
Improving children’s digital literacy was also identified as a crucial aspect in protecting them online. The importance of education in equipping children with the necessary skills to navigate the digital world effectively was acknowledged.
The discussion also highlighted the significance of international cooperation in addressing the issue of children’s online safety. China has partnered with UNICEF for activities related to children’s online safety, demonstrating their commitment to working together on a global scale to protect children.
In conclusion, the discussion on protecting children online emphasised the need for collective responsibility, adaptable laws and regulations, improved digital literacy, and international cooperation. These recommendations and efforts aim to create a safe and secure digital environment for children, ensuring their well-being in the increasingly connected world.
Patrick Burton
Emerging technologies offer both opportunities and risks for child online protection. Technologies such as Thorn’s child sexual abuse material classifier, the Finnish and Swedish SomeBuddy initiative, and machine learning-based redirection programs for potential offenders have proved valuable in combating online child exploitation. However, their implementation also raises concerns about privacy and security. Potential risks include threats to children’s autonomy and consent and a lack of accountability, transparency, and explainability.
To address these concerns, it is crucial to prioritize the collective rights of children in the design, regulation, and legislation of these technologies. Any policies or regulations should ensure the protection and promotion of children’s rights. States have a responsibility to enforce these principles and ensure that businesses comply. This approach aims to create a safe online environment for children while harnessing the benefits of emerging technologies.
The implementation of age verification systems also requires careful consideration. While age verification can play a role in protecting children online, it is essential to ensure that no populations are excluded from accessing online services due to these systems. Legislation should prevent the exacerbation of existing biases or the introduction of new ones. Recent trends indicate an increasing inclination towards the adoption of age verification systems, but fairness and inclusivity should guide their implementation.
Additionally, it is important to question whether certain technologies, particularly AI, should be built at all. Relying solely on AI to solve problems often perpetuated by AI itself raises concerns. The potential consequences and limitations of AI in addressing these issues must be carefully assessed. While AI can offer valuable solutions, alternative approaches may be more effective in some situations.
In summary, emerging technologies present both opportunities and challenges for child online protection. Prioritizing the collective rights of children through thoughtful design, regulation, and legislation is crucial to leverage the benefits of technology while mitigating risks. Age verification systems should be implemented in a way that considers biases and ensures inclusivity. Moreover, a critical evaluation of whether certain technologies should be developed is necessary to effectively address the issues at hand.
Xianliang Ren
There is a global consensus on the need to strengthen online protection for children. Studies have revealed that in China alone, there are almost 200 million minors who have access to the internet, and 52% of minors start using it before the age of 10. This highlights the importance of safeguarding children’s online experiences and ensuring their safety in the digital world.
In response to this concern, the Chinese government has introduced provisions for the cyber protection of children’s personal information. Special rules and user agreements have been put in place, and interim measures have been implemented for the administration of generative artificial intelligence services. These efforts are aimed at protecting the privacy and security of children when they engage with various online platforms and services.
There is a growing belief that platforms should take social responsibility for protecting children online. It is suggested that they should implement features like kid mode, which can help create a safer online environment for young users. By providing child-friendly settings and content filters, platforms can mitigate potential risks and ensure age-appropriate online experiences for children.
Additionally, it is argued that the development and regulation of science and technologies should be done in accordance with the law. This calls for ethical considerations and responsible practices within the industry. By adhering to regulations, technological innovations can be harnessed for the greater good while avoiding potential harm or misuse.
Improving children’s digital literacy through education and awareness is seen as crucial in tackling online risks. Schools, families, and society as a whole need to work together to raise awareness among minors about the internet and equip them with the knowledge and skills to recognize risks and protect themselves. This can be achieved by integrating digital literacy education into school curricula and empowering parents and caregivers to guide children’s online experiences.
Furthermore, it is important for the internet community to strengthen dialogue and cooperation based on mutual respect and trust. By fostering a collaborative approach, stakeholders can work together to address the challenges of online protection for children. This includes engaging in constructive discussions, sharing best practices, and developing collective strategies to create a safer digital environment for children.
In conclusion, there is a consensus that online protection for children needs to be strengthened. The Chinese government has introduced provisions for the cyber protection of children’s personal information, and there is a call for platforms to implement features like kid mode and take social responsibility. It is crucial to develop and regulate science and technologies in accordance with the law, improve children’s digital literacy through education, and promote dialogue and cooperation within the internet community. By taking these steps, we can create a safer and more secure online environment for children worldwide.
Mengyin Wang
Tencent, a prominent technology company, is leveraging technology to ensure the safety of minors and promote education. With a positive sentiment, Tencent places a strong emphasis on delivering high-quality content and advocating for the well-being of minor internet users. In line with their mission and vision, the company has initiated several key initiatives.
In 2019, Tencent launched T-mode, a platform that consolidates and promotes high-quality content related to AI, digital learning, and other positive topics. This initiative aligns with Goal 4 (Quality Education) and Goal 9 (Industry, Innovation, and Infrastructure) of the Sustainable Development Goals (SDGs). The T-mode platform aims to provide a safe and valuable online experience for minors by curating content that meets strict quality standards.
To promote education and inspire learning, Tencent has taken significant steps. They released an AI and programming lesson series, offering a free introductory course to young users. This initiative aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. The course is designed to cater to schools with limited teaching resources and aims to reduce educational inequalities.
Tencent has also partnered with Tsinghua University to organize the Tencent Young Science Fair, an annual popular science event. This event aims to engage and inspire young minds in science and aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. Through interactive exhibits and demonstrations, the fair encourages the next generation to explore the wonders of science and fosters a love for learning.
In addressing the protection and development of minors in the digital age, Tencent has harnessed the power of AI technology. It compiled guidelines for constructing internet applications for minors based on AI technology, showing its commitment to creating safe and age-appropriate digital environments for young users. Additionally, Tencent offered its Real Action initiative technology for free to improve the user experience of people with cochlear implants, including children. This initiative aligns with Goal 3 (Good Health and Well-being) and Goal 9 (Industry, Innovation, and Infrastructure) of the SDGs.
In conclusion, Tencent’s initiatives in ensuring minor safety online and promoting education demonstrate their commitment to making a positive impact. Their focus on providing high-quality content, offering free AI and programming lessons, organizing the Tencent Young Science Fair, compiling guidelines for internet applications, and enhancing accessibility for individuals with cochlear implants showcases their dedication to the protection and development of minors in the digital age. Through these initiatives, Tencent is paving the way for a safer and more inclusive online environment for the younger generation.
Dora Giusti
The rapidly evolving digital landscape poses potential risks to children’s safety, with statistics showing that one in three internet users is a child. This alarming figure highlights the vulnerability of children in the online world. Additionally, the US-based National Center for Missing and Exploited Children received 32 million reports of suspected child sexual exploitation and abuse in 2022, further emphasizing the urgent need for action.
To protect child rights in the digital realm, there is a pressing need for increased cooperation and multidisciplinary efforts. The emerging risks presented by immersive digital spaces and AI-facilitated environments necessitate a collective approach to address these challenges. The UN Committee on the Rights of the Child has provided principles to guide efforts in safeguarding child rights in the ever-changing digital environment. By adhering to these principles, stakeholders can ensure the protection of children and the upholding of their rights online.
In addition to cooperation and multistakeholder efforts, raising awareness and promoting digital literacy are crucial in creating a safer digital ecosystem for children. Educating children about the potential risks they may encounter online empowers them to make informed decisions and stay safe. Responsible design principles that prioritize the safety, privacy and inclusion of child users should also be implemented. By adhering to these principles, developers can create platforms and technologies that provide a secure and positive digital experience for children.
The analysis highlights the urgent need for action to address the risks children face in the digital landscape. It underscores the importance of collaboration, guided by the principles set forth by the UN Committee on the Rights of the Child, to protect child rights in the digital world. Furthermore, it emphasizes the significance of raising awareness, promoting digital literacy, and implementing responsible design principles to ensure the safety and well-being of children online. Integrating these strategies will support the creation of a safer and more inclusive digital environment for children.
Zengrui Li
The Communication University of China (CUC) has made a significant move by incorporating Artificial Intelligence (AI) as a major, recognizing the transformative potential of this emerging technology. This integration showcases the university’s commitment to preparing students for the future and aligns with the United Nations’ Sustainable Development Goals (SDGs) of Quality Education and Industry, Innovation, and Infrastructure.
In addition to integrating AI into its programs, CUC has also established research centers focused on exploring and advancing emerging technologies. This demonstrates the university’s dedication to technological progress and interdisciplinary construction related to Internet technology.
CUC has also recognized the importance of protecting children online and the need for guidelines to safeguard their well-being in the face of emerging technologies. It is suggested that collaboration among government departments, scientific research institutions, social organizations, and relevant enterprises is crucial in establishing these guidelines. CUC’s scientific research teams have actively participated in the AI for Children project group, playing key roles in formulating guidelines for Internet applications for minors based on AI technology.
The comprehensive integration of AI as a major and the establishment of research centers at CUC reflect the university’s commitment to technological advancement. It highlights the importance of recognizing both the benefits and risks of emerging technologies and equipping students with the necessary skills and knowledge to navigate the digital landscape responsibly.
Overall, CUC’s initiative to integrate AI as a major and its involvement in protecting children online demonstrate a proactive approach towards technology, education, and social responsibility. The university’s collaboration with various stakeholders signifies the importance of interdisciplinary cooperation in addressing complex challenges in the digital age.
Sun Yi
The discussion revolves around concerns and initiatives related to online safety for children in Japan. It is noted that a staggering 98.5% of young people in Japan use the internet, with a high rate of usage starting as early as elementary school. In response, the Ministry of Internal Affairs and Communications has implemented an information security program aimed at educating children on safe internet practices. The program addresses the increasing need for online safety and provides children with the necessary knowledge and skills to navigate the online world securely.
Additionally, the NPO Information Security Forum plays a crucial role in co-hosting internet safety education initiatives with local authorities. These collaborative efforts highlight the significance placed on educating children about online safety and promoting responsible internet usage.
However, the discussions also highlight challenges associated with current online safety measures in Japan. Specifically, concerns arise regarding the need to keep filter application databases up-to-date to effectively protect children from harmful content. Moreover, the ability of children to disable parental controls poses a significant challenge in ensuring their online safety. Efforts must be made to address these issues and develop robust safety measures that effectively protect children from potential online threats.
On a positive note, there is recognition of the potential of artificial intelligence (AI) and big data in ensuring online safety for children. The National Institute of Advanced Industrial Science and Technology (AIST) provides real-time AI analysis for assessing the risk of child abuse. This highlights the use of advanced technology in identifying and preventing potential dangers that children may encounter online.
Furthermore, discussions highlight the use of collected student activity data to understand learning behaviors and identify potential distractions. This demonstrates how big data can be leveraged to create a safer online environment for children by identifying and mitigating potential risks and challenges related to online learning platforms.
To create supportive systems and enhance online safety efforts, collaboration with large platform providers is essential. However, challenges exist in collecting detailed data on student use, particularly on major e-learning platforms such as Google and Microsoft. Addressing these challenges is crucial to developing effective strategies and implementing measures to ensure the safety of children using these platforms.
In summary, the discussions on online safety for children in Japan emphasize the importance of addressing concerns and implementing initiatives to protect children in the digital space. Progress has been made through information security programs and collaborative efforts, but challenges remain in keeping filter applications up-to-date, preventing children from disabling parental controls, and collecting detailed data from major e-learning platforms. The potential of AI and big data in enhancing online safety is recognized, and future collaborations with platform providers are necessary to create safer online environments for children.
Session transcript
Moderator – Shenrui LI:
Okay, hello everyone, excellencies, ladies and gentlemen, and also young friends, because I saw there are some children also joining us for this session. Welcome all to the Internet Governance Forum 2023 Open Forum No. 15, Protecting Children Online with Emerging Technologies. My name is Li Shenrui, and I am a Child Protection Officer at the UNICEF China Country Office. It is my honor to welcome you as the moderator of this session and, on behalf of the China Federation of Internet Societies, UNICEF China, and Communication University of China, to convey warm greetings to all of you at this important forum. A big thank you for being here today. In this session we will discuss the most trendy topics around protecting children with emerging technologies. As many of you may know, two years ago UNICEF released the Policy Guidance on AI for Children 2.0, a global policy guidance for governments and industry. The conversation has continued over the last two years on how to protect children online and how to adjust our policy actions and practices, not only on the government side but also on the industry and civil society side, to engage and leverage resources to protect our children. Taking this opportunity, we have guest speakers with various backgrounds who will share their insights on this topic. So without further ado, let’s welcome our honored guest, Mr. Ren Xianliang, the Secretary General of the World Internet Conference and the President of the China Federation of Internet Societies, to give us opening remarks. Please, welcome.
Xianliang Ren:
Ladies and gentlemen, I am pleased to attend the UN Internet Governance Forum in 2023 and this forum on protecting children online with emerging technologies. On behalf of the organizers, I want to congratulate everyone for putting together an amazing event, and a warm welcome to all our guests. It’s great to be here at the IGF 2023 Open Forum, Protecting Children Online with Emerging Technologies. In today’s world, technologies like AI, big data, and the Internet of Things are everywhere. They have a huge impact on our lives and raise new issues for Internet governance, especially when it comes to protecting children online. On one hand, the Internet is an important tool for children to learn and communicate. On the other hand, it brings risks like harmful content, addiction, fraud, and privacy breaches. There is a global consensus that we need to strengthen online protection for children. Studies show that in China alone, almost 200 million minors have access to the Internet. The age of first exposure is getting younger too, with 52% of minors using the Internet before the age of 10. That’s why we need to strengthen online protection for children. The Chinese government and society have taken this issue seriously.
The government has introduced the Provisions on the Cyber Protection of Children’s Personal Information, which require operators to set up special rules and user agreements for such protection, and the Interim Measures for the Administration of Generative Artificial Intelligence Services, which make sure that generative AI, including how it works and what data it uses, is regulated. The Regulations on the Online Protection of Minors, and a dedicated chapter on cyber protection in the new Law on the Protection of Minors, make sure kids are protected when they are online. Special efforts have been made to clean up the online environment, and platforms have taken social responsibility by implementing features like kid mode. As social organizations, the World Internet Conference and the China Federation of Internet Societies are actively involved in children’s online protection too. The WIC Wuzhen Summit has held forums on children’s online protection for consecutive years, and CFIS has cooperated with UNICEF to host or participate in activities related to children’s online safety at the IGF, collecting cases of AI for children and promoting them globally. These efforts have yielded positive results. To protect children’s online security with emerging technologies, we need to communicate more, build consensus, and take collective action. Here, I’d like to share three suggestions. First, we should regulate emerging technologies in accordance with the law. It is important to establish and improve relevant laws and regulations, let the rule of law guide, regulate, and safeguard the application of new technologies and the development of new business forms, and regulate new technology application scenarios according to law. I recommend that government departments strengthen supervision, continue to carry out rectification campaigns to correct online disorder, and build a firewall for children’s online security.
Second, we should make sure science and technology are developed to do good. As providers of all kinds of application services, website platforms should strengthen their primary responsibility and establish sound minor modes, anti-addiction mechanisms, and reporting and handling mechanisms, to prevent and combat content and behavior that infringes on children’s legitimate rights and interests. We encourage enterprises to strengthen research and development of child online protection technology, using technology against technology to improve protective capabilities for children online. Third, it’s crucial to improve children’s digital literacy. Schools, families, and society as a whole should work together to raise awareness and educate minors about the Internet, equipping them with the knowledge and skills to recognize risks and protect themselves. In addition, schools and parents should be better prepared to guide children through internet use. Social organizations and research institutions should utilize social and industrial resources and work on the ethical governance of emerging technologies, including establishing mechanisms for ethical review and certification. We should also develop cross-regional and cross-platform cooperation to study and solve the problems of black and grey industries and hidden cyber threats targeting children, and jointly create a network space where children can grow up healthily. Last but not least, I suggest that the internet community strengthen dialogue and cooperation based on mutual respect and trust. We cannot tackle difficult issues such as illegal industries targeting children and hidden cyber threats without cooperation across regions and platforms. Together, we can build a community with a shared future in cyberspace that fosters the healthy growth of children. We will continue to make dedicated efforts towards this goal and contribute to a better and safer cyber world for children. I wish this forum great success. Thank you.
Moderator – Shenrui LI:
Okay, thanks to Mr. Ren for the opening remarks. It’s always thrilling to see that China is dedicated to being a pioneer, exploring and leading positive pathways towards an enabling and safe digital environment for children globally, while emphasizing, as Mr. Ren mentioned, the adaptability of laws and regulations, the clear roles and responsibilities of different sectors, including industry and civil society, and the improvement of children’s digital literacy. We are glad to see that China keeps seeking opportunities for international cooperation on this important topic, and we hope to unpack those suggestions later in our discussion today. Next, let’s welcome Mr. Li Zengrui, the Deputy Director of the Council of the Communication University of China. Let’s welcome.
Zengrui Li:
Distinguished Mr. Ren Xianliang, Ms. Dora, ladies and gentlemen from around the world, good afternoon, good evening, good morning. I’m very pleased to participate in this open forum with the theme Protecting Children Online with Emerging Technologies. First of all, please allow me, on behalf of Communication University of China, or CUC, one of the organizers of this forum, to warmly welcome all experts and scholars in attendance. Thank you for your attention to the topic of children’s online protection. With the rapid development of the Internet, the wave of digital technology and information networking has swept the world. By June 2023, the number of netizens in China had exceeded 1 billion, about 20 percent of whom are adolescents and students, the largest single group. The popularity of the Internet has given children more access to emerging technologies and more opportunities to use them. Emerging technologies not only bring great convenience to children’s education, health, and entertainment, but also raise concerns about privacy protection and fairness. CUC has always valued the integration of disciplinary construction related to Internet technology, technological progress, and social responsibility, and has deepened its academic accumulation in intelligent media and networks. A number of research centers related to emerging technologies have also been established, including the State Key Laboratory of Media Convergence and Communication, the Key Laboratory of Intelligent Media of the Ministry of Education, and the Key Laboratory of Audiovisual Technology and Intelligent Control Systems of the Ministry of Culture and Tourism. In addition, the School of Information and Communication Engineering has set up AI as a major to cultivate interdisciplinary senior talent for AI-related scientific research, design, development, and integrated applications in fields such as information, culture, radio and television, and the media industry.
Building on this accumulation of academic and social research and its advantages in emerging Internet technology, and at the invitation of CFIS and UNICEF, a scientific research team from CUC joined the AI for Children project group. As a key member, our team conducted in-depth research on the application of AI for children and participated in the formulation of guidelines for the construction of Internet applications for minors based on AI technology. Different from traditional Internet applications, Internet applications driven by emerging technologies introduce intelligent technologies such as machine learning, deep learning, natural language processing, and knowledge graphs. The use of these technologies helps to provide more well-being for children, such as health monitoring, recommendation of quality content, and companionship for special groups. However, emerging technologies also bring many risks to children, such as unfairness, data and privacy security, and Internet addiction. Therefore, stakeholders such as government departments, scientific research institutions, social organizations, and relevant enterprises should deepen exchanges, enhance consensus, strengthen cooperation, and form guidelines and rules for the common global development of protecting children online with emerging technologies, so as to promote the healthy development of emerging technologies and better benefit people around the world. I hope that through the exchanges of this open forum, we can all draw inspiration from the application of emerging technologies for children, and contribute to the development and application of emerging technologies in children-related fields. Finally, I hope this open forum will be a success and will promote global awareness of children’s online protection. Thank you very much. Thank you.
Moderator – Shenrui LI:
Okay, thank you Mr. Li for sharing, and for expressing CUC’s commitment to generating more evidence on child online protection. There has been good cooperation between CUC and UNICEF China in working on the documentation of AI for children cases, and we definitely hope to see more such collaborations. Now please let us welcome Mr. Patrick Burton, Child Online Protection Consultant, to share the key considerations in regulating emerging technologies for the protection of children. The floor is yours, Patrick.
Patrick Burton:
Thank you very much. Can I just check that everybody can see my screen? Loud and clear, please. Perfect. Thank you. Sorry, give me a second. I just need to turn translation off, but I’ve got an echo. There we go. Hopefully that will be better. So thank you very much, Chairperson, Secretary-General, colleagues, fellow speakers, experts, participants in the room, friends that I know are there. Thank you so much for the opportunity to speak to you and for convening this forum in the first place. It’s difficult to watch or to read the news these days without hearing about AI and the impact of artificial intelligence or digital technology on children’s lives. Often this is phrased in negative terms: for example, the impact of screen time, as problematic as that phrase is, whether on children’s concentration and well-being; or the escalating reports of child sexual abuse material or children’s exposure to explicit images; or sometimes the tragic results of the cyberbullying that children are experiencing. And I think this is only surpassed, perhaps, by the growing attention on the impact of AI and emerging technologies specifically, not least in feeding these risks and in exacerbating and catalyzing harmful outcomes for children. Yet, as the title of this forum suggests, that same technology can certainly offer a wealth of opportunities, many of which have already been alluded to by the previous speakers, in the right context and with the appropriate oversight, regulation, and design to mitigate some of the potential for harm that the underlying fabric and construction of algorithms and machine learning introduce as they escalate into children’s everyday use of digital technology.
These range from the use of predictive analytics and behavioral models for prevention, deterrence and response to cyberbullying, child sexual offending and other risks, to the use of machine learning and deep neural networks for scanning and hashing of child sexual abuse material. Each of these offers important guardrails against the emerging adaptations of risk that exponentially changing technology introduces into children's lives. Now, I'll just touch on a couple of examples of how emerging technologies using AI in different forms are being used to keep children safe online; many of you, I'm sure, will have heard of some of these. Thorn's child sexual abuse material classifier is a machine learning based tool that can find new or unknown child sexual abuse material in both images and videos. When potential CSAM is flagged for review and the moderator confirms the decision, the classifier learns from it, continually improving from those moderator reviews in a feedback loop. It is significant in that it marks a departure from existing child sexual abuse material mechanisms, which depend on existing reports held in databases and on hashing and matching technology; rather, it detects new, unknown or unclassified child sexual abuse material. That's just one example. Another, which is somewhat different but so important and often overlooked, is the use of AI to support children in dealing with issues they encounter online. The example I've got here is SomeBuddy, a Finnish and Swedish service developed to support children and adolescents who have potentially experienced online harassment. Through its chatbot, cases are analyzed, and what it calls a first aid kit is offered to children, with step-by-step guidance on how to deal with each situation on a case-by-case basis.
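The two detection layers contrasted here, plus the moderator-in-the-loop feedback step, can be sketched in miniature as follows. Everything in this sketch is an illustrative stand-in: real systems use perceptual hashes such as PhotoDNA rather than cryptographic ones, and a trained ML model rather than a keyword score.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a perceptual hash; SHA-256 only matches byte-identical copies.
    return hashlib.sha256(data).hexdigest()

# Database of hashes of previously reported, human-confirmed material (toy data).
known_hashes = {fingerprint(b"previously reported item")}

def classifier_score(data: bytes) -> float:
    # Stand-in for a machine-learning model's confidence score.
    return 0.9 if b"suspicious" in data else 0.1

def review(item: bytes) -> str:
    # Layer 1: hash matching only catches previously known material.
    if fingerprint(item) in known_hashes:
        return "known material: blocked automatically"
    # Layer 2: a classifier can flag new, unknown material for human review.
    if classifier_score(item) > 0.5:
        return "flagged for moderator review"
    return "no action"

def moderator_confirms(item: bytes) -> None:
    # Feedback loop: once a moderator confirms a flagged item, it joins the
    # known set (and, in a real system, the classifier's training data).
    known_hashes.add(fingerprint(item))

print(review(b"suspicious new item"))  # flagged for moderator review
moderator_confirms(b"suspicious new item")
print(review(b"suspicious new item"))  # known material: blocked automatically
```

The point of the layering is exactly the departure described above: the hash database can only ever re-find what humans have already confirmed, while the classifier extends detection to never-before-seen material, at the cost of needing human oversight to correct its mistakes.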
Importantly, it also has a mechanism for review by legal experts, ensuring the safety and child-friendliness of the system through constant human oversight, something I touch on again later. The third example I'd like to give is somewhat different from the previous ones, and something I think we are only starting to pay enough attention to: deterrence and behavior change for potential offenders. The ReDirection program, and there is a similar initiative out of the UK, uses machine learning to offer self-help programs to prevent child sexual offending, specifically by focusing on deterring the use of child sexual abuse material. It constantly and iteratively learns from information and data shared by users, and importantly is transparent in the collection and use of this data. Like the SomeBuddy initiative, it is also subject to oversight and training from human operators. Similar initiatives use predictive analytics to promote behavior change and help-seeking among child sexual abuse offenders. Those are just three out of a multitude of practical examples of how emerging technology is being used to keep children safe online. Yet as much as these technologies offer immense opportunities for keeping children safe, they also introduce risks to children. These are not necessarily new risks, but rather new or exacerbated manifestations of the existing risks that digital technologies present in children's lives. They pose important questions for how the tech is designed, how it is regulated, and how it is legislated. For example, a couple of key questions need to be taken into account. What data is used for machine learning? How is it collected? What biases might it introduce into operations? How are these biases mitigated? Where is the data stored? Who has access to it, intentionally and unintentionally?
And what is the purpose of that access to the data? Predictive models and machine learning require immense amounts of children's data, the collection and storage of which might introduce new privacy and security risks into children's lives. There are a number of ethical dilemmas around this. To what degree do approaches such as predictive analytics and nudge techniques, when applied using AI, allow for personal freedom of choice and autonomy of decision-making, rather than manipulating users? Particularly if those users, those children, are not aware of or do not fully understand how that technology is being used, how the data is being used, or how an intervention is being applied. Somewhat related to this is the ring-fencing of data that is collected and used to inform these models, for purposes of purpose limitation and data minimization. Now, the moderator made reference to a couple of documents that UNICEF has produced: the policy guidance on AI for children, and a number of papers from UNICEF Innocenti that highlight some of these challenges. Just to carry on: risks to children's autonomy and consent. Technology deployed to detect new child sexual abuse material or grooming, for example using classifiers such as the one in the earlier example, would not necessarily be able to differentiate between consensual sexual conversations or image sharing between two adolescents of legal age in that jurisdiction, on the one hand, and otherwise unknown and unhashed child sexual abuse material on the other, potentially introducing risks and biases for those children. Related to this, what are the underlying assumptions that underpin those algorithms or machine learning models about what is age-appropriate, contextually appropriate, culturally appropriate, consensual behavior, and how are differences by context, region and location taken into account?
What about the lack of accountability, transparency and explainability? Machine learning systems are making decisions based on data and algorithmic determinations. How and when are these decisions explained to children, or to their parents, in a way that they understand? And do they detract from individual decision-making? There are many more questions, such as perceptual hashing's potential for false positives. Some of these risks are more applicable to some forms of emerging technology, and to particular uses, than others, but most are common to some degree across the different forms of technology that use machine learning and deep learning. I don't have five days, so I'm just going to draw attention to some of the key issues around regulation, and particularly around addressing some of the challenges that the use of emerging technologies poses. I say I don't have five days because this is a challenge that countries and regions throughout the world are battling with, and while we have some really promising examples relating to some of the challenges in legislation, it is an evolving conversation, and it is going to take us a while to get the framing and the regulatory and policy environment really sound in order to protect the collective rights of children. And I'm starting with the protective rights of children because underlying any legislation or policy has to be an assurance that all technology and regulation are used in the mandate to protect and ensure the collective, equal, indivisible and inseparable rights of children, rather than prioritizing one right over another. That means anticipating many of the potential unintended consequences that technology might have down the road on children's collective rights.
This ranges from the obligations of due diligence by industry, designing and implementing technology so as to anticipate and address adverse effects on the rights of the child, to the responsibility of states to ensure that businesses adopt and adhere to these principles and are held accountable, and to ensuring that states themselves respect and adhere to these principles and mandates. Now, these are enshrined in the Convention on the Rights of the Child, and they are certainly contained in General Comment No. 25 and in the emerging global guidance, treaties and instruments designed to protect children's rights. A couple of more recent pieces of legislation and policy frameworks are starting to incorporate these effectively: the Australian Online Safety Act, the UK Online Safety Bill, which addresses this to some degree, and, I say to some degree, the EU DSA, as well as the recent draft EU regulations that explicitly address the need to anticipate and detect online harms before they occur. What is interesting is that the recent EU directive calls for relevant judicial bodies to ensure that technology companies objectively and diligently assess, identify and weigh, on a case-by-case basis, which is critical, not only the likelihood and seriousness of the potential consequences of services being misused for the types of online child sexual abuse at issue, but also the likelihood and seriousness of any potential negative consequences for other parties affected. One thing I don't have on the slide here that is also critical, contained in EU legislation as well as Australian legislation at least, and I'm sure in others, is the importance of requiring third-party, independent and public annual audits to assess the impact on child rights as detailed in the CRC and General Comment No. 25. Moving on, some more examples.
If age verification is to be adopted, as most recent pieces of legislation point to, contained in various EU documents, in the draft UK Online Safety Bill, and in Australian legislation, and I say if because age verification is not yet perfect, it is not where it should be in order to function effectively, though it is very likely to get there, then significant steps will need to be taken prior to its implementation: to ensure that the child population is equitably equipped with whatever identification is required to verify age, and that certain populations are not excluded. We need to make sure that age verification does not reinforce existing biases or introduce new biases or exclusionary practices. Okay, Patrick, sorry to interrupt, but we are running out of time, so could you please wrap up within one minute? I will wrap up within one minute, I'm almost there. I've already spoken about AI oversight bodies, importantly with attached mechanisms for redress. And that is something I have to say: we know from speaking to children throughout the world that one of their major concerns is that when they make reports, or when AI or automated report systems are used, there is no response. We need to make sure there is accountability for those responses. We also need to make sure that regulation and policies are designed in a way that is not limited to existing emerging technologies, but rather provides scope for future developments and definitions. The very last point I'd like to make is a quote from a recent paper by Amanda Lenhart and Kellie Owens on common myths and evidence, which makes the point that some technology cannot be fixed by more design. We cannot necessarily design our way out of problems.
Sometimes those technologies should not be built at all. And I guess my final comment is, do we and can we and will we rely on emerging technologies and AI to fix the problems that often result from AI in the first place? Do we rely on AI to create the internet that we want? And that’s perhaps a question more than an answer. Thank you and apologies for going over time.
Moderator – Shenrui LI:
Okay, thank you, Patrick, for your thoughtful sharing. We all know this is never an easy question to answer, and we are all devoted to finding the right balance in the trade-offs around child online protection. We definitely want to hear more from you in the future. But next, let's welcome Professor Sun Yi from the Kobe Institute of Computing, Graduate School of Information Technology, to share his thoughts on this topic. Please.
Sun Yi:
Okay. Good afternoon, everyone. Thank you to UNICEF China and the China Federation of Internet Societies for giving me the opportunity to share my experience here. My name is Sun Yi. I am Chinese, but I have lived in Japan for more than 20 years, and I am now an Associate Professor at the Graduate School of Information Technology at the Kobe Institute of Computing. Today I want to share some of my personal experience of internet safety technology for children in Japan. Next slide, okay. First, I want to share the internet use rate among young people in Japan, based on data published by the Cabinet Office, Government of Japan, in 2022. In this data, 98.5% of young people responded that they use the internet, with the smartphone as the most used device. As you can see in the graph on the right, there is a high rate of internet use starting in elementary school. My daughter in Japan also has a smartphone. Okay. In the digital age, ensuring the safety of children online is a paramount concern, and in Japan constant efforts are underway to address this issue. On the government side, the Ministry of Internal Affairs and Communications runs a program called the Information Security Site for Citizens, with a key mission of educating children on safe internet practices. On the NPO side, the NPO Information Security Forum co-hosts internet safety education programs with local authorities and organizations, extending the reach of internet safety education to various communities. These efforts help make the internet safe for our kids, letting them enjoy its benefits while protecting them from its dangers. On the technology side, various technologies are also on offer. Filtering technology stands as a popular measure for safeguarding children's internet use: it is deployed through smartphone applications or set on network devices at schools and homes, and some network service providers also provide it as a service.
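The core of the filtering technology described here is a simple allow/deny decision against a maintained blocklist. A minimal sketch, with hypothetical domains; real products ship vendor-maintained databases that, as noted below, must be kept up to date to stay effective:

```python
# Hypothetical blocklist; real filters download updated databases from the vendor.
blocklist = {"adult.example", "gambling.example"}

def is_allowed(url: str) -> bool:
    # Extract the hostname from the URL (scheme and path stripped).
    host = url.split("//")[-1].split("/")[0].split(":")[0].lower()
    # Block the listed domain and any subdomain of it.
    return not any(host == d or host.endswith("." + d) for d in blocklist)

print(is_allowed("https://school.example/lesson"))      # True
print(is_allowed("https://videos.adult.example/clip"))  # False
```

A device-side filter applies this check on the phone itself, while a network-side filter applies it at the router or ISP, which is exactly why switching networks bypasses the latter but not the former.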
Moreover, parental controls on smartphones help limit usage time and accessible applications. However, there are big challenges. When you use a filter, it is important to keep the filter application's database up to date in order to provide the most effective protection. And if you are using a network-side filter, it is very simple to get around: switching to another network will disable the filter. As for parental controls, the setup on a smartphone is very complicated, maybe even for me, and often parents cannot configure it correctly. And believe me, kids are smarter than we imagine; they can always find a way to disable parental controls. More than once I have heard young boys proudly tell me how they removed the restrictions on their school PCs. Okay, next slide. Using big data and AI technology to protect students' safe use of the internet is a new technology trend. AIST, the National Institute of Advanced Industrial Science and Technology, provides real-time AI analysis for child abuse risk assessment and decision-making support. Using this system, they can assess the severity of abuse and the potential for recurrence in order to help kids. Okay, the next slide. Oh, this one, it's okay. Sorry, okay. At the same time, our research group is working on a related study about e-learning. In an open-source learning management system, we collect all the activity of students while they are interacting with the system: every click, what they watched, and how long they watched each page. All this data is collected and utilized to patternize students' learning behaviors, enabling real-time personalized feedback and significantly improving the learning experience. Interestingly, we also developed a method to identify why students struggle with learning. Upon investigation, we discovered that the struggle is often not with the learning materials, but with distractions like online games.
Next slide, please. Our research group realized that this same support system can help ensure kids use the internet safely, without needing an external setup, which makes it easier to use. But the challenge is that many schools use learning platforms like Google's and Microsoft's. These platforms make it very easy to create learning materials even if you have no IT skills, but they do not let us collect the detailed data on how students use them. So if we want to enhance internet safety in this way, teaming up with the big platform providers is very important. In addition, there are many issues related to personal privacy with this data; there is a trade-off between protecting privacy and improving data availability, which will be a big challenge. Okay. That's all my presentation. Thank you.
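The learning-analytics idea in this presentation, aggregating per-student clickstream events and flagging those whose time goes to distractions rather than course materials, can be sketched as follows. The event format, names, and threshold are hypothetical stand-ins; the real LMS instrumentation is not specified in the talk.

```python
from collections import defaultdict

# Hypothetical clickstream: (student, target, seconds spent).
events = [
    ("alice", "course:video", 1200),
    ("alice", "course:quiz", 300),
    ("bob", "course:video", 60),
    ("bob", "game.example.com", 2400),
]

def flag_distracted(events, threshold=0.5):
    """Flag students who spend more than `threshold` of their logged time
    outside course materials (a crude proxy for distraction)."""
    time_by = defaultdict(lambda: [0, 0])  # student -> [course_secs, other_secs]
    for student, target, secs in events:
        idx = 0 if target.startswith("course:") else 1
        time_by[student][idx] += secs
    flagged = []
    for student, (course, other) in time_by.items():
        total = course + other
        if total and other / total > threshold:
            flagged.append(student)
    return flagged

print(flag_distracted(events))  # ['bob']
```

As the speaker notes, this kind of analysis only works when the platform exposes fine-grained usage data, which is why cooperation with the big platform providers, and the accompanying privacy trade-offs, matter so much.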
Moderator – Shenrui LI:
Thank you, Professor Sun, for joining us today. That raises a lot of questions about how to employ the data we already have to inform our practices. Next, please join us in welcoming Ms. Wang Mengying, senior director of the culture and content division of Tencent, to share with us.
Mengyin Wang:
Thank you. I'm Wang Mengying, senior director of the culture and content division of Tencent, here to share with you how to use emerging technologies to keep children safe online. As we are all aware, emerging digital technologies such as AI and large language models are developing rapidly, enabling internet applications to scale and expand substantially and offering children a much richer digital world for learning, living and engaging with the world. There are nearly 200 million minor netizens in China, with the internet adoption rate among minors reaching almost 100%. Children now access the internet at younger ages, with an evident rural-urban information gap as well as a lack of risk awareness when going online, given the large number of children under the age of 12. The digital world is changing rapidly, and protecting children's rights and interests in the digital world is always at the top of the agenda. Just now, Professor Sun Yi shared with us his research and thoughts on children's online protection in Japan, which was tremendously enlightening, and now I'm going to offer an industry perspective. Tencent is firmly committed to its mission and vision, which is value for users and tech for good. We actively explore and improve our online safety solutions for minors, making full use of the company's experience in information and digital technologies and mobilizing resources in society at large. Tencent is committed to providing high-quality content for young users, and this is what Tencent is working on at this moment. Firstly, we bring together quality content to encourage netizens to use the internet positively. In 2019, Tencent kicked off T-mode in a handful of its products, consolidating high-quality content, not just for young users but also for the general public.
Tencent is also working with a Chinese foundation on the Master Class for the Young initiative, in which top-class scientists, experts and educators are invited to teach our young audience their "lesson one" in various areas: Nobel Prize-winning physicist Professor Yang Zhenning, the chief designer of China's spacecraft, the president of the Chinese Academy of Sciences, and the president of the Society of Cultural Relics. The master classes were then turned into featured video lectures in 4K resolution for circulation, in the hope that these great materials can truly benefit more children, offer fascinating learning content, and inspire their future professional pursuits. Secondly, Tencent provides professional education to help young people in the digital age. Today's young people need to keep a finger on the pulse of emerging technologies so as to prepare for the future. On September 1st this year, Tencent released AI and Programming Lesson One, a pro bono project offering young users a free introductory course on AI and programming at home, through a lightweight package on WeChat, notably for schools in rural areas with low-income children. For schools suffering from limited teaching resources, the course was adapted to need only a blackboard and basic equipment, and can even take place in a computer-free mode, allowing students to learn AI as their urban counterparts do, through activities such as role-playing. This program has already debuted in 21 classes at 14 primary schools in four cities, including Beijing, Shanghai, Shenzhen and Guangzhou. Most students found it captivating to let the machine identify objects through simple labeling, and many teachers said such programs are very important for building up children's creative mindset, enabling them to spot potential questions and to troubleshoot using computational thinking.
Thirdly, as an advocate for scientific thinking, Tencent strives to guide minors to understand the internet and their own development in a positive manner. The curiosity of the young mind is much treasured; young people need diversified channels to explore the real world, and proper education to experience the pervasive world beyond their screens. Starting from 2019, Tencent and Tsinghua University have jointly carried out an annual popular science event named the Tencent Young Science Fair. More than 2,000 young scientists and enthusiasts have met face-to-face with top international scientists at the fair, and 40 million online viewers have been impressed by the charms of science. More and more youngsters in China are now taking scientists as role models, and scientific exploration is becoming a new fashion. Helping minors grow up healthily is a vision shared by the international community. In 2022, Tencent teamed up with a number of companies and organizations to compile and release guidelines for constructing internet applications for minors based on AI technology, bringing the synergies of the industry to promote online safety for children while developing digital technologies. Tencent is also exploring AI technology to improve the growing environment for minors. For example, the Tianlai Action initiative, launched by Tencent in 2020, offered charities, groups and equipment manufacturers Tencent's Tianlai audio technology for free, improving the user experience for those with cochlear implants, including children. Children are the future and the hope of mankind, and minors' protection is by all means a common cause, as wonderful and daunting as it is. I'm pleased to share with you that in September, AI and Programming Lesson One was rolled out in primary schools all around the country, sowing the seeds of AI in the hearts of many children in rural areas.
The master classes for the young now total 139 episodes and have already reached 10 million young people, with more than 100 million views so far. Lastly, Tencent looks forward to joining hands with you all in building a clean internet and a safe digital world for our children. Thank you all.
Moderator – Shenrui LI:
Okay, thank you, Ms. Wang, for sharing good practices from Tencent. And last but not least, to conclude this session, let's welcome Ms. Dora Giusti, Chief of Child Protection at the UNICEF China country office, to deliver the closing remarks. Please welcome.
DORA GIUSTI:
Distinguished experts and participants, as we bring this forum to a close, allow me to thank you for your insightful ideas and for your participation in this important forum on emerging technologies and child online protection. We live in an era driven by technologies such as artificial intelligence, blockchain and other new technologies that are poised to reshape our society. Globally, a child goes online for the first time every half a second. One in three internet users are children. We have heard today how this has positive connotations and impact in terms of learning and accessing information, but we have also heard that there are potential risks. Children may be exposed to harms like illegal content, privacy breaches, cyberbullying, and most seriously sexual abuse and exploitation through the use of technology. In 2022, the US-based National Center for Missing and Exploited Children received 32 million reports from around the world of suspected child sexual exploitation and abuse cases, an increase of 9% from 2021. Europol identified that the increase has been going on year by year, but that during COVID, due to increased activity related to the lockdowns, the rise was particularly significant. As today we talked about emerging technologies, we need to consider that the use of immersive digital spaces, which are virtual environments that create a sense of presence or immersion for users and are facilitated by AI, may expose children to environments that are not designed for them, amplifying the risks of sexual grooming and exploitation, for instance through potential abusers' use of virtual rooms or personas that groom them. As technology evolves, immersive digital spaces will become more widespread in all fields, and the risk will therefore also increase. We need therefore to understand in depth the implications and impact of the risks for children.
On a positive note, we have heard today how AI technologies can help address child sexual exploitation and abuse online. For instance, there exists an array of AI-based techniques that can be designed to detect different elements of the spectrum of illegal materials, behaviors and practices linked to child sexual exploitation and abuse online. In addition to identifying and preventing abuse, AI can also be used to support children who have experienced abuse, as we saw in Patrick's presentation. While this is positive for the prevention, detection and investigation of cases of child sexual abuse and exploitation online, the use of AI may also impact data protection, safeguards and users' privacy. Therefore, protecting child rights in the digital world and ensuring safety relies on striking a balance between the right to protection from harm and the right to privacy. This is one of the guiding principles of the UN Committee on the Rights of the Child's General Comment No. 25 on children's rights in relation to the digital environment. This document has provided us with important principles to address the issue of child rights in a rapidly changing technology environment, with the objective of preventing risks from becoming harms and of ensuring children's right to be informed while they become digital citizens. We know much more today than a decade ago. We heard today, echoing the Secretary-General's words, Patrick's, and all the other speakers', that we need to cooperate. We need to work together. We need to look at different dimensions. We need to coordinate efforts at the legal and policy level, in criminal justice, victim support, society and culture, and the technology industry, and invest in research and data. Before I conclude, allow me to emphasize some key actions to ensure that we have a safe digital environment for children, echoing also the words of the Secretary-General and other speakers.
First of all, we need to enhance our understanding of child safety within this evolving landscape: increase evidence generation on trends, patterns and risks for children engaged in this evolving digital environment, and also bring forward solutions that are effective. Secondly, we need to strengthen and develop laws, policies and standards that can evolve as rapidly as the changing environment and that can assess both the critical benefits and the risks. We need harmonization of legislation and standards across the globe, because this is a global problem, and we need to involve experts from different disciplines. Third, we need tech companies to embrace responsible design principles and standards, prioritizing the safety, privacy and inclusion of child users and conducting frequent child rights reviews of their products and services; we have heard a few examples during this forum. Fourth, we need to continue raising awareness of safety and digital literacy among children, parents, caregivers and society as a whole. We rally for collective action by governments, the private sector, civil society organizations, international organizations, academia, families and children themselves. Together, we must ensure emerging technologies create a safer, more accessible digital world for children. Thank you very much.
Moderator – Shenrui LI:
Okay, thank you, Dora, for the very comprehensive and encouraging closing remarks. As you mentioned, these are all essential building blocks for enabling a safe digital environment for all children. We hope today's session has brought some enlightening insights to all of you; thank you for your attention and participation. We look forward to seeing you at our session next year at IGF 2024. Okay, thank you all.
Speakers
DORA GIUSTI
Speech speed
136 words per minute
Speech length
898 words
Speech time
397 secs
Arguments
Potential risks to children’s safety in the evolving digital landscape
Supporting facts:
- One in three internet users are children.
- US-based National Center for Missing and Exploited Children received 32 million reports of suspected child sexual exploitation and abuse cases in 2022
Topics: Child Protection, Online Safety, Digital Literacy, Artificial Intelligence, Blockchain, Virtual Environments
Report
The rapidly evolving digital landscape poses potential risks to children’s safety, with statistics showing that one in three internet users are children. This alarming figure highlights the vulnerability of children in the online world. Additionally, the US-based National Center for Missing and Exploited Children received 32 million reports of suspected child sexual exploitation and abuse in 2022, further emphasizing the urgent need for action.
To protect child rights in the digital realm, there is a pressing need for increased cooperation and multidisciplinary efforts. The emerging risks presented by immersive digital spaces and AI-facilitated environments necessitate a collective approach to address these challenges. The UN Committee on the Rights of the Child has provided principles to guide efforts in safeguarding child rights in the ever-changing digital environment.
By adhering to these principles, stakeholders can ensure the protection of children and the upholding of their rights online. In addition to cooperation and multistakeholder efforts, raising awareness and promoting digital literacy are crucial in creating a safer digital ecosystem for children.
Educating children about the potential risks they may encounter online empowers them to make informed decisions and stay safe. Responsible design principles that prioritize the safety, privacy and inclusion of child users should also be implemented. By adhering to these principles, developers can create platforms and technologies that provide a secure and positive digital experience for children.
The analysis highlights the urgent need for action to address the risks children face in the digital landscape. It underscores the importance of collaboration, guided by the principles set forth by the UN Committee on the Rights of the Child, to protect child rights in the digital world.
Furthermore, it emphasizes the significance of raising awareness, promoting digital literacy, and implementing responsible design principles to ensure the safety and well-being of children online. Integrating these strategies will support the creation of a safer and more inclusive digital environment for children.
Mengyin Wang
Speech speed
153 words per minute
Speech length
1071 words
Speech time
420 secs
Arguments
Using Technology for Minor Safety Online
Supporting facts:
- Tencent’s mission and vision emphasize tech for good and value for users.
- Tencent advocates for high-quality content for minor internet users.
- In 2019, Tencent launched T-mode, consolidating high-quality content.
Topics: AI, Digital Learning, Positive Content
Promoting Education and Inspiring Learning
Supporting facts:
- Tencent released AI and programming lesson one, offering free introductory course to young users.
- The course is also designed for schools with limited teaching resources.
- Tencent and Tsinghua University jointly carried out an annual popular science event named Tencent Young Science Fair.
Topics: AI, Programming, Education Equality
Protection and Development of Minors in Digital Age
Supporting facts:
- In 2022, Tencent compiled the guidelines for constructing internet applications for minors based on AI technology.
- Tencent offered the Real Action initiative technology for free to improve user experience for those with cochlear implants, including children.
Topics: Child Safety, Corporate Responsibility, AI
Report
Tencent, a prominent technology company, is leveraging technology to ensure the safety of minors and promote education. With a positive sentiment, Tencent places a strong emphasis on delivering high-quality content and advocating for the well-being of minor internet users. In line with their mission and vision, the company has initiated several key initiatives.
In 2019, Tencent launched the T-mode, a platform that consolidates and promotes high-quality content related to AI, digital learning, and positive content. This initiative aligns with Goal 4 (Quality Education) and Goal 9 (Industry, Innovation, and Infrastructure) of the Sustainable Development Goals (SDGs).
The T-mode platform aims to provide a safe and valuable online experience for minors by curating content that meets strict quality standards. To promote education and inspire learning, Tencent has taken significant steps. They released an AI and programming lesson series, offering a free introductory course to young users.
This initiative aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. The course is designed to cater to schools with limited teaching resources and aims to reduce educational inequalities. Tencent has also partnered with Tsinghua University to organize the Tencent Young Science Fair, an annual popular science event.
This event aims to engage and inspire young minds in science and aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. Through interactive exhibits and demonstrations, the fair encourages the next generation to explore the wonders of science and fosters a love for learning.
In addressing the protection and development of minors in the digital age, Tencent has harnessed the power of AI technology. They compiled guidelines for constructing internet applications specifically designed for minors based on AI technology. This shows Tencent’s commitment to creating safe and age-appropriate digital environments for young users.
Additionally, Tencent offered the Real Action initiative technology for free to improve the user experience for users with cochlear implants, including children. This initiative aligns with Goal 3 (Good Health and Well-being) and Goal 9 (Industry, Innovation, and Infrastructure) of the SDGs. In conclusion, Tencent’s initiatives in ensuring minor safety online and promoting education demonstrate their commitment to making a positive impact.
Their focus on providing high-quality content, offering free AI and programming lessons, organizing the Tencent Young Science Fair, compiling guidelines for internet applications, and enhancing accessibility for individuals with cochlear implants showcases their dedication to the protection and development of minors in the digital age.
Through these initiatives, Tencent is paving the way for a safer and more inclusive online environment for the younger generation.
Moderator – Shenrui LI
Speech speed
141 words per minute
Speech length
797 words
Speech time
340 secs
Arguments
Li Shenrui emphasises the importance of protecting children online and of adjusting policy actions and practices from government, industry and civil society to better safeguard children.
Supporting facts:
- Two years ago, UNICEF released Policy Guidance 2.0 on AI for children globally.
- Internet Governance Forum 2023 Open Forum No. 15 is focused on Protecting Children Online with Emerging Technologies.
Topics: Protecting Children Online, Emerging Technologies, AI for Children, Internet Governance, UNICEF
China is dedicated to being a pioneer in exploring and leading the way towards a safe digital environment for children globally
Supporting facts:
- The Chinese government introduced provisions on cyber protection of children’s personal information
- China has held forums on children’s online protection for consecutive years
Topics: Online Safety, Child Protection, Internet Governance, Technology
Improving children’s digital literacy is crucial
Topics: Education, Digital Literacy
Report
During the discussion on protecting children online, the speakers placed great emphasis on the importance of safeguarding children in the digital space. Li Shenrui, a Child Protection Officer from UNICEF China Council Office, highlighted the need for collective responsibility among various stakeholders, including governments, industries, and civil society, in order to effectively protect children from online harms.
Li stressed that it is not enough to rely solely on policies; education and awareness are also crucial elements in ensuring children’s safety online. China is dedicated to leading the way in creating a safe digital environment for children globally.
The Chinese government has introduced provisions to protect children’s personal information in the cyberspace. Additionally, the country has organised forums on children’s online protection for consecutive years, demonstrating their commitment to addressing this issue. Xianliang Ren further contributed to the discussion by highlighting the importance of adaptability in laws and regulations for addressing emerging technologies.
Ren recommended regulating these technologies in accordance with the law and suggested that platforms should establish mechanisms such as ‘kid mode’ to protect children from inappropriate content. This highlights the need for clear roles and responsibilities in the digital space.
Improving children’s digital literacy was also identified as a crucial aspect in protecting them online. The importance of education in equipping children with the necessary skills to navigate the digital world effectively was acknowledged. The discussion also highlighted the significance of international cooperation in addressing the issue of children’s online safety.
China has partnered with UNICEF for activities related to children’s online safety, demonstrating their commitment to working together on a global scale to protect children. In conclusion, the discussion on protecting children online emphasised the need for collective responsibility, adaptable laws and regulations, improved digital literacy, and international cooperation.
These recommendations and efforts aim to create a safe and secure digital environment for children, ensuring their well-being in the increasingly connected world.
Patrick Burton
Speech speed
167 words per minute
Speech length
2464 words
Speech time
884 secs
Arguments
Emerging technologies offer multiple opportunities for child online protection, yet also introduce new risks
Supporting facts:
- Examples of such technologies include Thorn’s child sexual abuse material classifier, the Finnish and Swedish SomeBuddy initiative, and redirection programs that use machine learning to reach potential offenders
- These technologies are valuable but also can lead to issues like potential privacy and security risks, risks to children’s autonomy of consent, and lack of accountability, transparency, and explainability
Topics: Artificial Intelligence, Emerging Technologies, Child Online Protection
Age verification systems, if adopted, should not exacerbate existing biases or introduce new ones
Supporting facts:
- Before implementing age verification, significant steps need to be taken to make sure no populations are excluded
- Recent legislation points towards the adoption of age verification
Topics: Age Verification Systems, Bias, Online Safety
Report
Emerging technologies offer both opportunities and risks for child online protection. These technologies, such as Thorn’s child sexual abuse material classifier, the Finnish and Swedish SomeBuddy initiative, and machine learning-based redirection programs for potential offenders, have proved valuable in combating online child exploitation.
However, their implementation also raises concerns about privacy and security. Potential risks include threats to children’s autonomy of consent and the lack of accountability, transparency, and explainability. To address these concerns, it is crucial to prioritize the collective rights of children in the design, regulation, and legislation of these technologies.
Any policies or regulations should ensure the protection and promotion of children’s rights. States have a responsibility to enforce these principles and ensure that businesses comply. This approach aims to create a safe online environment for children while harnessing the benefits of emerging technologies.
The implementation of age verification systems also requires careful consideration. While age verification can play a role in protecting children online, it is essential to ensure that no populations are excluded from accessing online services due to these systems. Legislation should prevent the exacerbation of existing biases or the introduction of new ones.
Recent trends indicate an increasing inclination towards the adoption of age verification systems, but fairness and inclusivity should guide their implementation. Additionally, it is important to question whether certain technologies, particularly AI, should be built at all. Relying solely on AI to solve problems often perpetuated by AI itself raises concerns.
The potential consequences and limitations of AI in addressing these issues must be carefully assessed. While AI can offer valuable solutions, alternative approaches may be more effective in some situations. In summary, emerging technologies present both opportunities and challenges for child online protection.
Prioritizing the collective rights of children through thoughtful design, regulation, and legislation is crucial to leverage the benefits of technology while mitigating risks. Age verification systems should be implemented in a way that considers biases and ensures inclusivity. Moreover, a critical evaluation of whether certain technologies should be developed is necessary to effectively address the issues at hand.
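The bias concern raised about age verification systems can be made concrete with a small audit sketch: comparing verification failure (exclusion) rates across demographic groups to detect whether one population is disproportionately locked out. The group labels and sample data below are purely illustrative assumptions, not from any real system.

```python
# Hedged sketch: audit an age-verification system for disparate
# exclusion by comparing per-group failure rates. Groups and data
# here are hypothetical placeholders.
from collections import defaultdict


def exclusion_rates(results):
    """results: list of (group, passed) pairs -> failure rate per group."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, passed in results:
        totals[group] += 1
        if not passed:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}


sample = [("A", True), ("A", True), ("A", False),
          ("B", False), ("B", False), ("B", True)]
rates = exclusion_rates(sample)
# A large gap between groups would flag the kind of disparate
# exclusion the speaker warns legislation must guard against.
print(rates)
```

In practice such an audit would run over real verification logs; a persistent gap between groups would be the signal that the system excludes some populations, which is exactly what the speaker argues must be ruled out before adoption.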
Sun Yi
Speech speed
147 words per minute
Speech length
904 words
Speech time
369 secs
Arguments
Concerns and initiatives related to online safety for children in Japan
Supporting facts:
- 98.5% of young people in Japan use the internet, with high rates of use starting in elementary school
- Ministry of Internal Affairs and Communications runs an information security program for educating children on safe internet practices
- NPO Information Security Forum co-hosts internet safety education with local authorities
Topics: Online Safety, Children
Challenges associated with current online safety measures
Supporting facts:
- Need to keep filter application databases up-to-date
- The ability of children to find ways to disable parental controls
- Challenges with properly configuring parental controls
Topics: Internet Use, Children, Online Safety
Potential of AI and big data in ensuring online safety
Supporting facts:
- AIST provides real-time AI analysis for child abuse risk assessment
- The use of collected student activity data in understanding learning behaviors and distractions
Topics: AI, Big Data, Internet Safety
Report
The discussion revolves around concerns and initiatives related to online safety for children in Japan. It is noted that a staggering 98.5% of young people in Japan use the internet, with a high rate of usage starting as early as elementary school.
In response, the Ministry of Internal Affairs and Communications has implemented an information security program aimed at educating children on safe internet practices. The program addresses the increasing need for online safety and provides children with the necessary knowledge and skills to navigate the online world securely.
Additionally, the NPO Information Security Forum plays a crucial role in co-hosting internet safety education initiatives with local authorities. These collaborative efforts highlight the significance placed on educating children about online safety and promoting responsible internet usage. However, the discussions also highlight challenges associated with current online safety measures in Japan.
Specifically, concerns arise regarding the need to keep filter application databases up-to-date to effectively protect children from harmful content. Moreover, the ability of children to disable parental controls poses a significant challenge in ensuring their online safety. Efforts must be made to address these issues and develop robust safety measures that effectively protect children from potential online threats.
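The filter-database challenge described above can be sketched in a few lines: a URL filter is only as good as its blocklist, so it needs a mechanism for merging in freshly published entries. All domain names and the class design below are illustrative assumptions, not any vendor's actual filter.

```python
# Minimal sketch of a filter application backed by a blocklist
# "database" that must be kept up to date, as noted in the session.
# Domains here are hypothetical examples.
from urllib.parse import urlparse


class UrlFilter:
    def __init__(self, blocked_domains):
        # In a real product this set would be refreshed regularly
        # from a maintained database; a stale list is the weakness
        # highlighted in the discussion.
        self.blocked_domains = set(blocked_domains)

    def update(self, new_domains):
        """Merge freshly published entries into the blocklist."""
        self.blocked_domains.update(new_domains)

    def is_allowed(self, url):
        host = urlparse(url).hostname or ""
        # Block the listed domain itself and any of its subdomains.
        return not any(
            host == d or host.endswith("." + d) for d in self.blocked_domains
        )


f = UrlFilter(["badsite.example"])
print(f.is_allowed("https://badsite.example/page"))   # False
print(f.is_allowed("https://school.example/lesson"))  # True
f.update(["new-harm.example"])
print(f.is_allowed("http://www.new-harm.example/"))   # False
```

The second challenge, children disabling parental controls, is not solvable at this layer at all: a client-side check like the one above can be bypassed by anyone who controls the device, which is why the session pairs technical filtering with education.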
On a positive note, there is recognition of the potential of artificial intelligence (AI) and big data in ensuring online safety for children. The National Institute of Advanced Industrial Science and Technology (AIST) provides real-time AI analysis for assessing the risk of child abuse.
This highlights the use of advanced technology in identifying and preventing potential dangers that children may encounter online. Furthermore, discussions highlight the use of collected student activity data to understand learning behaviors and identify potential distractions. This demonstrates how big data can be leveraged to create a safer online environment for children by identifying and mitigating potential risks and challenges related to online learning platforms.
To create supportive systems and enhance online safety efforts, collaboration with large platform providers is essential. However, challenges exist in collecting detailed data on student use, particularly on major e-learning platforms such as Google and Microsoft. Addressing these challenges is crucial to developing effective strategies and implementing measures to ensure the safety of children using these platforms.
In summary, the discussions on online safety for children in Japan emphasize the importance of addressing concerns and implementing initiatives to protect children in the digital space. Progress has been made through information security programs and collaborative efforts, but challenges remain in keeping filter applications up-to-date, configuring parental controls, and collecting detailed data from major e-learning platforms.
The potential of AI and big data in enhancing online safety is recognized, and future collaborations with platform providers are necessary to create safer online environments for children.
Xianliang Ren
Speech speed
82 words per minute
Speech length
864 words
Speech time
635 secs
Arguments
There is a global consensus that we need to strengthen online protection for children.
Supporting facts:
- Studies show that in China alone, there are almost 200 million minors who have access to the Internet.
- 52% of minors start using the Internet before the age of 10.
Topics: Children’s Online Safety, Internet Governance
The Chinese government has introduced provision on the cyber protection of personal information of children
Supporting facts:
- The government has set up special rules and user agreement for the protection.
- Interim measures have been put in place for the administration of generative artificial intelligence services.
Topics: Children’s privacy, Government regulations
It’s crucial to improve children’s digital literacy.
Supporting facts:
- Schools, families, and society as a whole should work together to raise awareness and educate minors about the internet.
- Equipped with knowledge and skills to recognize risk and protect themselves.
Topics: Education, Children’s Online Safety
Report
There is a global consensus on the need to strengthen online protection for children. Studies have revealed that in China alone, there are almost 200 million minors who have access to the internet, and 52% of minors start using it before the age of 10.
This highlights the importance of safeguarding children’s online experiences and ensuring their safety in the digital world. In response to this concern, the Chinese government has introduced provisions for the cyber protection of children’s personal information. Special rules and user agreements have been put in place, and interim measures have been implemented for the administration of generative artificial intelligence services.
These efforts are aimed at protecting the privacy and security of children when they engage with various online platforms and services. There is a growing belief that platforms should take social responsibility for protecting children online. It is suggested that they should implement features like kid mode, which can help create a safer online environment for young users.
By providing child-friendly settings and content filters, platforms can mitigate potential risks and ensure age-appropriate online experiences for children. Additionally, it is argued that the development and regulation of science and technologies should be done in accordance with the law.
This calls for ethical considerations and responsible practices within the industry. By adhering to regulations, technological innovations can be harnessed for the greater good while avoiding potential harm or misuse. Improving children’s digital literacy through education and awareness is seen as crucial in tackling online risks.
Schools, families, and society as a whole need to work together to raise awareness among minors about the internet and equip them with the knowledge and skills to recognize risks and protect themselves. This can be achieved by integrating digital literacy education into school curricula and empowering parents and caregivers to guide children’s online experiences.
Furthermore, it is important for the internet community to strengthen dialogue and cooperation based on mutual respect and trust. By fostering a collaborative approach, stakeholders can work together to address the challenges of online protection for children. This includes engaging in constructive discussions, sharing best practices, and developing collective strategies to create a safer digital environment for children.
In conclusion, there is a consensus that online protection for children needs to be strengthened. The Chinese government has introduced provisions for the cyber protection of children’s personal information, and there is a call for platforms to implement features like kid mode and take social responsibility.
It is crucial to develop and regulate science and technologies in accordance with the law, improve children’s digital literacy through education, and promote dialogue and cooperation within the internet community. By taking these steps, we can create a safer and more secure online environment for children worldwide.
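The "kid mode" feature discussed in this session can be illustrated with a minimal sketch: a profile flag that switches the platform to a curated whitelist of content categories. The categories and the age threshold below are assumptions for illustration only, not any platform's actual policy.

```python
# Illustrative sketch of a "kid mode" gate: a profile setting that
# restricts content to a curated whitelist. Categories and the age
# cutoff are hypothetical.
ALLOWED_IN_KID_MODE = {"education", "science", "kids-entertainment"}


def can_view(content_category: str, profile_age: int, kid_mode: bool) -> bool:
    """Return True if this profile may view content of the given category."""
    if kid_mode or profile_age < 14:
        # Kid mode (or a known minor profile) sees only curated categories.
        return content_category in ALLOWED_IN_KID_MODE
    return True  # Unrestricted adult profiles see everything.


print(can_view("science", profile_age=10, kid_mode=True))    # True
print(can_view("gambling", profile_age=10, kid_mode=True))   # False
print(can_view("gambling", profile_age=30, kid_mode=False))  # True
```

Real deployments layer this kind of gate with age verification, time limits and content review; the sketch only shows the basic age-appropriate filtering idea the speakers attribute to platform responsibility.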
Zengrui Li
Speech speed
97 words per minute
Speech length
606 words
Speech time
375 secs
Arguments
Integration of discipline development related to Internet technology with technological progress and social responsibility
Supporting facts:
- Communication University of China has set up AI as a major
- A number of emerging technology-related research centers have been established
Topics: Internet Technology, Social Responsibility
Report
The Communication University of China (CUC) has made a significant move by incorporating Artificial Intelligence (AI) as a major, recognizing the transformative potential of this emerging technology. This integration showcases the university’s commitment to preparing students for the future and aligns with the United Nations’ Sustainable Development Goals (SDGs) of Quality Education and Industry, Innovation, and Infrastructure.
In addition to integrating AI into its programs, CUC has also established research centers focused on exploring and advancing emerging technologies. This demonstrates the university’s dedication to technological progress and interdisciplinary construction related to Internet technology. CUC has also recognized the importance of protecting children online and the need for guidelines to safeguard their well-being in the face of emerging technologies.
It is suggested that collaboration among government departments, scientific research institutions, social organizations, and relevant enterprises is crucial in establishing these guidelines. CUC’s scientific research teams have actively participated in the AI for Children project group, playing key roles in formulating guidelines for Internet applications for minors based on AI technology.
The comprehensive integration of AI as a major and the establishment of research centers at CUC reflect the university’s commitment to technological advancement. It highlights the importance of recognizing both the benefits and risks of emerging technologies and equipping students with the necessary skills and knowledge to navigate the digital landscape responsibly.
Overall, CUC’s initiative to integrate AI as a major and its involvement in protecting children online demonstrate a proactive approach towards technology, education, and social responsibility. The university’s collaboration with various stakeholders signifies the importance of interdisciplinary cooperation in addressing complex challenges in the digital age.