Protecting children online with emerging technologies | IGF 2023 Open Forum #15

10 Oct 2023 09:45h - 10:45h UTC

Event report

Speakers and Moderators

Speakers:
  • Hui Zhao, China Federation of Internet Societies, Civil Society, Asia-Pacific Group
  • Dora Giusti, UNICEF China, Intergovernmental Organization
  • Eleonore Pauwels, UNICEF, Intergovernmental Organization (online)
  • André F. Gygax, the University of Melbourne, Civil Society (online)
  • Zhu Xiong, Tencent, Private sector, Asia-Pacific Group
  • Tetsushi Kawasaki, Gifu University, Civil Society, Asia-Pacific Group
  • Pineros Carolina, Red PaPaz, Civil Society, Latin American and Caribbean Group
  • Nkoro Ebuka, Kumoh National Institute of Technology of Korea, Civil Society, Asia-Pacific Group
Moderators:
  • Rui Li, UNICEF China
  • Xiuyun Ding, China Federation of Internet Societies

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Moderator – Shenrui Li

During the discussion on protecting children online, the speakers placed great emphasis on safeguarding children in the digital space. Li Shenrui, a Child Protection Officer from the UNICEF China Country Office, highlighted the need for collective responsibility among governments, industries, and civil society to protect children effectively from online harms. Shenrui stressed that it is not enough to rely solely on policies; education and awareness are also crucial to ensuring children’s safety online.

China is dedicated to leading the way in creating a safe digital environment for children globally. The Chinese government has introduced provisions to protect children’s personal information in cyberspace, and the country has organised forums on children’s online protection in consecutive years, demonstrating its commitment to addressing this issue.

Xianliang Ren further contributed to the discussion by highlighting the importance of adaptability in laws and regulations for addressing emerging technologies. Ren recommended regulating these technologies in accordance with the law and suggested that platforms should establish mechanisms such as ‘kid mode’ to protect children from inappropriate content. This highlights the need for clear roles and responsibilities in the digital space.

Improving children’s digital literacy was also identified as a crucial aspect in protecting them online. The importance of education in equipping children with the necessary skills to navigate the digital world effectively was acknowledged.

The discussion also highlighted the significance of international cooperation in addressing the issue of children’s online safety. China has partnered with UNICEF for activities related to children’s online safety, demonstrating their commitment to working together on a global scale to protect children.

In conclusion, the discussion on protecting children online emphasised the need for collective responsibility, adaptable laws and regulations, improved digital literacy, and international cooperation. These recommendations and efforts aim to create a safe and secure digital environment for children, ensuring their well-being in the increasingly connected world.

Patrick Burton

Emerging technologies offer both opportunities and risks for child online protection. Technologies such as BORN’s child sexual abuse material classifier, the Finnish and Swedish somebody initiative, and machine learning-based redirection programs for potential offenders have proved valuable in combating online child exploitation. However, their implementation also raises concerns about privacy and security: potential risks include threats to children’s autonomy and consent, as well as a lack of accountability, transparency, and explainability.
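
As a purely illustrative sketch of how a redirection program of this kind might work in principle (the term list, threshold, and help URL below are hypothetical placeholders, not details of any system mentioned in the session), a simple query filter could score incoming searches and point high-risk queries towards a support resource instead of returning results:

```python
# Hypothetical sketch of a search-redirection filter for potential offenders.
# The term list, threshold, and help resource are illustrative placeholders only;
# a real system would rely on a trained classifier rather than keyword matching.

RISK_TERMS = {"placeholder_risky_term_1", "placeholder_risky_term_2"}
HELP_RESOURCE = "https://example.org/get-help"  # placeholder support URL
THRESHOLD = 0.5

def risk_score(query: str) -> float:
    """Crude score: fraction of query tokens that appear in the risk list."""
    tokens = query.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in RISK_TERMS)
    return hits / len(tokens)

def handle_query(query: str) -> str:
    """Redirect risky queries to a help resource; otherwise return results."""
    if risk_score(query) >= THRESHOLD:
        return f"Support is available. See {HELP_RESOURCE} for confidential help."
    return "...normal search results..."
```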

To address these concerns, it is crucial to prioritize the collective rights of children in the design, regulation, and legislation of these technologies. Any policies or regulations should ensure the protection and promotion of children’s rights. States have a responsibility to enforce these principles and ensure that businesses comply. This approach aims to create a safe online environment for children while harnessing the benefits of emerging technologies.

The implementation of age verification systems also requires careful consideration. While age verification can play a role in protecting children online, it is essential to ensure that no populations are excluded from accessing online services due to these systems. Legislation should prevent the exacerbation of existing biases or the introduction of new ones. Recent trends indicate an increasing inclination towards the adoption of age verification systems, but fairness and inclusivity should guide their implementation.
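
To make the inclusivity point concrete, the sketch below is a hypothetical illustration: the verification routes, field names, and age threshold are assumptions, not a description of any existing system. It shows an age gate that accepts several forms of evidence rather than a single method that some users cannot provide:

```python
# Hypothetical age-gate sketch illustrating the inclusivity concern above.
# Verification routes, field names, and the age threshold are assumptions only.

from typing import Callable, Optional

MIN_AGE = 13  # illustrative threshold; real thresholds vary by service and jurisdiction

def verify_by_id_document(user: dict) -> Optional[int]:
    """Return the age stated on an ID document, or None if the user has none."""
    return user.get("id_document_age")

def verify_by_parental_confirmation(user: dict) -> Optional[int]:
    """Return an age confirmed by a parent or guardian, or None."""
    return user.get("parent_confirmed_age")

VERIFICATION_ROUTES: list[Callable[[dict], Optional[int]]] = [
    verify_by_id_document,
    verify_by_parental_confirmation,
]

def is_old_enough(user: dict) -> bool:
    """Accept the first route that yields an age; refuse only if every route fails."""
    for route in VERIFICATION_ROUTES:
        age = route(user)
        if age is not None:
            return age >= MIN_AGE
    return False  # no evidence at all; a real system might offer a manual review path instead
```

Designing the gate around multiple routes, rather than a single mandatory document check, is one way to avoid excluding users who lack a particular form of identification.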

Additionally, it is important to question whether certain technologies, particularly AI, should be built at all. Relying solely on AI to solve problems often perpetuated by AI itself raises concerns. The potential consequences and limitations of AI in addressing these issues must be carefully assessed. While AI can offer valuable solutions, alternative approaches may be more effective in some situations.

In summary, emerging technologies present both opportunities and challenges for child online protection. Prioritizing the collective rights of children through thoughtful design, regulation, and legislation is crucial to leverage the benefits of technology while mitigating risks. Age verification systems should be implemented in a way that considers biases and ensures inclusivity. Moreover, a critical evaluation of whether certain technologies should be developed is necessary to effectively address the issues at hand.

Xianliang Ren

There is a global consensus on the need to strengthen online protection for children. Studies have revealed that in China alone, there are almost 200 million minors who have access to the internet, and 52% of minors start using it before the age of 10. This highlights the importance of safeguarding children’s online experiences and ensuring their safety in the digital world.

In response to this concern, the Chinese government has introduced provisions for the cyber protection of children’s personal information. Special rules and user agreements have been put in place, and interim measures have been implemented for the administration of generative artificial intelligence services. These efforts are aimed at protecting the privacy and security of children when they engage with various online platforms and services.

There is a growing belief that platforms should take social responsibility for protecting children online. It is suggested that they should implement features like kid mode, which can help create a safer online environment for young users. By providing child-friendly settings and content filters, platforms can mitigate potential risks and ensure age-appropriate online experiences for children.
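
A minimal sketch of what such a kid-mode content filter might look like is shown below; the rating labels and catalogue entries are assumptions made for illustration, not a description of any particular platform’s implementation:

```python
# Minimal, hypothetical sketch of a "kid mode" content filter.
# Rating labels and catalogue entries are illustrative assumptions only.

ALLOWED_RATINGS_IN_KID_MODE = {"all_ages", "children"}

def visible_in_kid_mode(item: dict, kid_mode_enabled: bool) -> bool:
    """When kid mode is on, show only items explicitly rated as child-appropriate."""
    if not kid_mode_enabled:
        return True
    return item.get("rating") in ALLOWED_RATINGS_IN_KID_MODE

catalogue = [
    {"title": "Science cartoon", "rating": "children"},
    {"title": "Late-night drama", "rating": "mature"},
]

kid_feed = [item for item in catalogue if visible_in_kid_mode(item, kid_mode_enabled=True)]
print([item["title"] for item in kid_feed])  # ['Science cartoon']
```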

Additionally, it is argued that the development and regulation of science and technologies should be done in accordance with the law. This calls for ethical considerations and responsible practices within the industry. By adhering to regulations, technological innovations can be harnessed for the greater good while avoiding potential harm or misuse.

Improving children’s digital literacy through education and awareness is seen as crucial in tackling online risks. Schools, families, and society as a whole need to work together to raise awareness among minors about the internet and equip them with the knowledge and skills to recognize risks and protect themselves. This can be achieved by integrating digital literacy education into school curricula and empowering parents and caregivers to guide children’s online experiences.

Furthermore, it is important for the internet community to strengthen dialogue and cooperation based on mutual respect and trust. By fostering a collaborative approach, stakeholders can work together to address the challenges of online protection for children. This includes engaging in constructive discussions, sharing best practices, and developing collective strategies to create a safer digital environment for children.

In conclusion, there is a consensus that online protection for children needs to be strengthened. The Chinese government has introduced provisions for the cyber protection of children’s personal information, and there is a call for platforms to implement features like kid mode and take social responsibility. It is crucial to develop and regulate science and technologies in accordance with the law, improve children’s digital literacy through education, and promote dialogue and cooperation within the internet community. By taking these steps, we can create a safer and more secure online environment for children worldwide.

Mengyin Wang

Tencent, a prominent technology company, is leveraging technology to ensure the safety of minors and promote education. The company places a strong emphasis on delivering high-quality content and advocating for the well-being of minor internet users, and, in line with its mission and vision, it has launched several key initiatives.

In 2019, Tencent launched T-mode, a platform that consolidates and promotes high-quality content related to AI, digital learning, and other positive topics. This initiative aligns with Goal 4 (Quality Education) and Goal 9 (Industry, Innovation, and Infrastructure) of the Sustainable Development Goals (SDGs). The T-mode platform aims to provide a safe and valuable online experience for minors by curating content that meets strict quality standards.

To promote education and inspire learning, Tencent has taken significant steps. They released an AI and programming lesson series, offering a free introductory course to young users. This initiative aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. The course is designed to cater to schools with limited teaching resources and aims to reduce educational inequalities.

Tencent has also partnered with Tsinghua University to organize the Tencent Young Science Fair, an annual popular science event. This event aims to engage and inspire young minds in science and aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. Through interactive exhibits and demonstrations, the fair encourages the next generation to explore the wonders of science and fosters a love for learning.

In addressing the protection and development of minors in the digital age, Tencent has harnessed the power of AI technology. The company compiled guidelines for constructing internet applications for minors based on AI technology, demonstrating its commitment to creating safe and age-appropriate digital environments for young users. Additionally, Tencent offered its Real Action initiative technology for free to improve the user experience of children with cochlear implants, among other users. This initiative aligns with Goal 3 (Good Health and Well-being) and Goal 9 (Industry, Innovation, and Infrastructure) of the SDGs.

In conclusion, Tencent’s initiatives to keep minors safe online and promote education demonstrate their commitment to making a positive impact. Their focus on providing high-quality content, offering free AI and programming lessons, organizing the Tencent Young Science Fair, compiling guidelines for internet applications, and enhancing accessibility for individuals with cochlear implants showcases their dedication to the protection and development of minors in the digital age. Through these initiatives, Tencent is paving the way for a safer and more inclusive online environment for the younger generation.

Dora Giusti

The rapidly evolving digital landscape poses potential risks to children’s safety, with statistics showing that one in three internet users is a child. This alarming figure highlights the vulnerability of children in the online world. Additionally, the US-based National Center for Missing and Exploited Children reported 32 million cases of suspected child sexual exploitation and abuse in 2022, further emphasizing the urgent need for action.

To protect child rights in the digital realm, there is a pressing need for increased cooperation and multidisciplinary efforts. The emerging risks presented by immersive digital spaces and AI-facilitated environments necessitate a collective approach to address these challenges. The UN Committee on the Rights of the Child has provided principles to guide efforts in safeguarding child rights in the ever-changing digital environment. By adhering to these principles, stakeholders can ensure the protection of children and the upholding of their rights online.

In addition to cooperation and multistakeholder efforts, raising awareness and promoting digital literacy are crucial in creating a safer digital ecosystem for children. Educating children about the potential risks they may encounter online empowers them to make informed decisions and stay safe. Responsible design principles that prioritize the safety, privacy and inclusion of child users should also be implemented. By adhering to these principles, developers can create platforms and technologies that provide a secure and positive digital experience for children.

The analysis highlights the urgent need for action to address the risks children face in the digital landscape. It underscores the importance of collaboration, guided by the principles set forth by the UN Committee on the Rights of the Child, to protect child rights in the digital world. Furthermore, it emphasizes the significance of raising awareness, promoting digital literacy, and implementing responsible design principles to ensure the safety and well-being of children online. Integrating these strategies will support the creation of a safer and more inclusive digital environment for children.

Zengrui Li

The Communication University of China (CUC) has made a significant move by incorporating Artificial Intelligence (AI) as a major, recognizing the transformative potential of this emerging technology. This integration showcases the university’s commitment to preparing students for the future and aligns with the United Nations’ Sustainable Development Goals (SDGs) of Quality Education and Industry, Innovation, and Infrastructure.

In addition to integrating AI into its programs, CUC has also established research centers focused on exploring and advancing emerging technologies. This demonstrates the university’s dedication to technological progress and interdisciplinary construction related to Internet technology.

CUC has also recognized the importance of protecting children online and the need for guidelines to safeguard their well-being in the face of emerging technologies. It is suggested that collaboration among government departments, scientific research institutions, social organizations, and relevant enterprises is crucial in establishing these guidelines. CUC’s scientific research teams have actively participated in the AI for Children project group, playing key roles in formulating guidelines for Internet applications for minors based on AI technology.

The comprehensive integration of AI as a major and the establishment of research centers at CUC reflect the university’s commitment to technological advancement. It highlights the importance of recognizing both the benefits and risks of emerging technologies and equipping students with the necessary skills and knowledge to navigate the digital landscape responsibly.

Overall, CUC’s initiative to integrate AI as a major and its involvement in protecting children online demonstrate a proactive approach towards technology, education, and social responsibility. The university’s collaboration with various stakeholders signifies the importance of interdisciplinary cooperation in addressing complex challenges in the digital age.

Sun Yi

The discussion revolves around concerns and initiatives related to online safety for children in Japan. It is noted that a staggering 98.5% of young people in Japan use the internet, with a high rate of usage starting as early as elementary school. In response, the Ministry of Internal Affairs and Communications has implemented an information security program aimed at educating children on safe internet practices. The program addresses the increasing need for online safety and provides children with the necessary knowledge and skills to navigate the online world securely.

Additionally, the NPO Information Security Forum plays a crucial role in co-hosting internet safety education initiatives with local authorities. These collaborative efforts highlight the significance placed on educating children about online safety and promoting responsible internet usage.

However, the discussions also highlight challenges associated with current online safety measures in Japan. Specifically, concerns arise regarding the need to keep filter application databases up-to-date to effectively protect children from harmful content. Moreover, the ability of children to disable parental controls poses a significant challenge in ensuring their online safety. Efforts must be made to address these issues and develop robust safety measures that effectively protect children from potential online threats.

On a positive note, there is recognition of the potential of artificial intelligence (AI) and big data in ensuring online safety for children. The National Institute of Advanced Industrial Science and Technology (AIST) provides real-time AI analysis for assessing the risk of child abuse. This highlights the use of advanced technology in identifying and preventing potential dangers that children may encounter online.

Furthermore, discussions highlight the use of collected student activity data to understand learning behaviors and identify potential distractions. This demonstrates how big data can be leveraged to create a safer online environment for children by identifying and mitigating potential risks and challenges related to online learning platforms.
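
As a rough illustration of this kind of analysis (the event format, the notion of “off-task” activity, and the 30% threshold below are hypothetical assumptions, not the data model of any platform named in the session), per-student activity logs could be aggregated and students flagged when off-task time dominates:

```python
# Hypothetical sketch of flagging potential distraction from student activity logs.
# The event format, the "off_task" category, and the 30% threshold are assumptions
# made for illustration only.

from collections import defaultdict

events = [
    {"student": "A", "category": "lesson", "seconds": 1200},
    {"student": "A", "category": "off_task", "seconds": 700},
    {"student": "B", "category": "lesson", "seconds": 1500},
    {"student": "B", "category": "off_task", "seconds": 100},
]

def flag_distracted(events, off_task_share=0.30):
    """Return students whose off-task time exceeds the given share of total time."""
    totals, off_task = defaultdict(int), defaultdict(int)
    for event in events:
        totals[event["student"]] += event["seconds"]
        if event["category"] == "off_task":
            off_task[event["student"]] += event["seconds"]
    return [s for s in totals if totals[s] and off_task[s] / totals[s] > off_task_share]

print(flag_distracted(events))  # ['A']
```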

To create supportive systems and enhance online safety efforts, collaboration with large platform providers is essential. However, challenges exist in collecting detailed data on student use, particularly on major e-learning platforms such as Google and Microsoft. Addressing these challenges is crucial to developing effective strategies and implementing measures to ensure the safety of children using these platforms.

In summary, the discussions on online safety for children in Japan emphasize the importance of addressing concerns and implementing initiatives to protect children in the digital space. Progress has been made through information security programs and collaborative efforts, but challenges remain in keeping filter applications up-to-date, preventing children from disabling parental controls, and collecting detailed data from major e-learning platforms. The potential of AI and big data in enhancing online safety is recognized, and future collaborations with platform providers are necessary to create safer online environments for children.
