Breaking the Fake in the AI World: Staying Smart in the Age of Misinformation, Disinformation, Hate, and Deepfake

10 Jul 2025 09:00h - 09:45h


Session at a glance

Summary

This workshop focused on the critical issue of misinformation and disinformation’s impact on children’s development, bringing together speakers from multiple countries to discuss AI-driven threats and protective strategies. Professor Salma Abbasi opened by presenting research on how digital platforms affect children’s physiological, psychological, and social development, describing the situation as a “hidden public health crisis.” She highlighted alarming statistics showing that 96% of cases studied across 14 countries involved either self-harm or violence against others, linking excessive screen time to disrupted brain development and dangerous behavioral changes.


Minister Dr. Aminata Zerbo from Burkina Faso emphasized how geopolitical instability has made the digital environment a vector of serious risks, particularly for vulnerable children exposed to manipulative content and toxic online influences. She outlined her government’s initiatives including awareness campaigns in schools and developing AI regulations that prioritize human dignity and social cohesion. Indonesia’s Director-General Fifi Aleyda Yahya shared their comprehensive approach, including delaying social media access until age 17-18 and establishing ethics guidelines for AI development, while stressing the need for human-centric AI solutions.


Young researcher Sami Galal presented findings on how screen exposure negatively impacts specific brain regions in children aged 0-5, including the prefrontal cortex, hippocampus, and visual cortex, leading to problems with emotional regulation, learning, and social interaction. Other panelists discussed grassroots initiatives in Bangladesh, policy frameworks across Africa, and technical standards development through organizations like IEEE. The discussion concluded with calls for collective action involving parents, governments, and private sector companies to establish duty of care, improve digital literacy, and ensure age-appropriate content design to protect children in the digital age.


Key points

## Major Discussion Points:


– **Impact of digital platforms on children’s development**: The discussion extensively covered how excessive screen time and social media exposure negatively affect children’s physiological, psychological, and social development, including disrupted brain development, addiction patterns, and behavioral changes.


– **AI-driven misinformation and disinformation threats**: Participants addressed the growing sophistication of AI-generated false content, deepfakes, and manipulated media, particularly highlighting how these threaten democratic processes, public trust, and vulnerable populations including children.


– **Technology-facilitated gender-based violence and toxic online culture**: The conversation examined how digital platforms normalize violence, misogyny, and hate speech, contributing to real-world harm and creating what speakers termed a “public health crisis.”


– **Global policy responses and regulatory frameworks**: Representatives from various countries (Indonesia, Burkina Faso, Bangladesh, Colombia) shared their national approaches to combating digital harms, including age verification systems, AI ethics guidelines, and awareness campaigns.


– **Multi-stakeholder collaboration and standards development**: The discussion emphasized the need for coordinated action between governments, tech companies, educators, parents, and international organizations, including the development of age-appropriate design standards and simplified terms of service for children.


## Overall Purpose:


The discussion aimed to address the growing crisis of misinformation, disinformation, and digital harm affecting children globally. The workshop sought to bring together diverse international stakeholders to share experiences, present research findings, and develop collaborative solutions for protecting children in digital spaces while building resilience against online threats.


## Overall Tone:


The discussion maintained a consistently serious and urgent tone throughout, with speakers treating the topic as a critical public health crisis requiring immediate action. The tone was collaborative and solution-oriented, with participants sharing both concerning research findings and practical interventions. While the subject matter was grave, the atmosphere remained constructive and forward-looking, emphasizing hope through collective action and youth empowerment. The inclusion of a young researcher (Sami) added an authentic voice that reinforced the urgency while demonstrating that young people can be part of the solution.


Speakers

**Speakers from the provided list:**


– **Salma Abbasi** – Professor, eWorldwide Group, moderator and organizer of the workshop on misinformation and disinformation


– **Aminata Zerbo** – Dr., Honorable Minister from Burkina Faso


– **Fifi Aleyda Yahya** – Director-General, Ministry of Communication and Digital of the Republic of Indonesia


– **Sami Galal** – Young participant/student who researched the impact of screen time on brain development


– **AHM Bazlur Rahman** – Representative from Bangladesh News Network for Radio and Communication, working on youth resilience to misinformation and technology-facilitated gender-based violence


– **Elise Elena Mola** – Lawyer specializing in EU AI Act implementation and AI governance for corporations


– **Yu Ping Chan** – Head of Partnership, UNDP (United Nations Development Programme), leads digital engagements and partnerships


– **Mactar Seck** – Dr., Chief of Technology Digital Transformation, AI, and Cyber, working on the African continent and continental AI strategy


– **IEEE representative** – Karen Mulberry (substituting for Karen McCabe), representing IEEE (Institute of Electrical and Electronics Engineers), working on technical standards for age-appropriate design


– **Gitanjali Sah** – ITU representative, involved in organizing the WSIS Forum


– **Audience** – Various audience members who asked questions during the session


**Additional speakers:**


– **Claudia Bustamante** – Director of CRC (regulator in Colombia)


– **Dr. Eva Fell** – International Committee of the Red Cross


– **Carol Constantine** – Human resources technology company representative


Full session report

# International Workshop Report: Misinformation, Disinformation, and Children’s Digital Safety


## Executive Summary


This international workshop, moderated by Professor Salma Abbasi of eWorldwide Group, brought together government ministers, international organisation representatives, technical standards experts, and youth researchers to address digital threats to children’s safety and development. The discussion focused on the intersection of artificial intelligence, misinformation, and child protection in digital environments, with speakers characterising the current situation as a “hidden public health crisis” requiring urgent international action.


## Opening Presentation: The Hidden Public Health Crisis


Professor Salma Abbasi opened the workshop by presenting research findings from her work across 14 countries, revealing that 96% of cases studied resulted in either self-harm or violence. She framed this as a “hidden public health crisis” affecting children’s development in three key areas: physiological, psychological, and social development.


Abbasi explained how digital platforms exploit children’s psychological vulnerabilities through dopamine-driven algorithms designed to maximise screen time rather than promote healthy development. She referenced the UK summer polarisation incident as an example of how online misinformation can lead to real-world violence, emphasising how exposure to toxic content normalises violence and creates harmful attitudes.


The presentation highlighted how technology-facilitated gender-based violence extends beyond individual harm to threaten social cohesion and democratic values across societies, establishing the framework for the subsequent panel discussion.


## Government Perspectives


### Burkina Faso’s National Response


Dr. Aminata Zerbo, Honourable Minister from Burkina Faso, addressed how geopolitical instability and security challenges compound digital threats to children. She explained that in contexts of political uncertainty, the digital environment becomes particularly dangerous for manipulative content targeting vulnerable populations.


Minister Zerbo outlined her government’s response, including awareness-raising campaigns in schools and development of legal frameworks for artificial intelligence that prioritise human dignity and social cohesion. She emphasised the importance of international cooperation whilst maintaining that solutions must be adapted to specific national contexts, calling for collective responses that respect national sovereignty.


### Indonesia’s Age-Based Restrictions


Fifi Aleyda Yahya, Director-General from Indonesia, shared her country’s approach to protecting children in digital spaces. Indonesia has implemented social media access restrictions until ages 17-18, whilst maintaining access to digital devices for educational purposes under supervision.


Yahya explained that this policy recognises the importance of digital literacy whilst acknowledging that social media platforms pose specific developmental risks. She noted that as AI-generated content becomes more sophisticated and seamless, detection methods must evolve accordingly, though current AI-generated content can often still be identified.


## Panel Discussion Responses


### Youth Research on Brain Development


Sami Galal, a young researcher studying screen time impacts on brain development, focused his research on children aged 0-5, examining effects on the prefrontal cortex, hippocampus, and visual cortex. His findings indicated that excessive screen time leads to problems with emotional regulation, learning difficulties, and vision issues.


Galal explained how screen-based activities require less effort than physical activities whilst providing similar dopamine rewards, creating patterns that interfere with healthy development. He advocated for interactive terms and conditions that children can understand, ideally written by children for children, and emphasised that parents should view digital devices as last resorts rather than convenient solutions.


### Technical Standards Development


Karen Mulberry, substituting for Karen McCabe from the Institute of Electrical and Electronics Engineers (IEEE), outlined ongoing work to develop technical standards for age-appropriate design. The IEEE is developing standards for age verification, e-gaming, and general age-appropriate design principles across different age groups.


She mentioned current draft guidelines for children’s safe engagement on social media and gaming platforms covering 36 countries, with plans for expansion through working group collaboration. These standards aim to bridge the gap between policy objectives and technical implementation.


### Bangladesh Grassroots Initiatives


AHM Bazlur Rahman from Bangladesh News Network for Radio and Communication described grassroots-level interventions focused on hyperlocal Facebook page development and social media training at community levels. His approach recognises that misinformation and disinformation pollute entire information ecosystems, threatening human progress by promoting violent extremism.


Rahman’s methodology operates at community levels where social media consumption actually occurs, demonstrating how locally relevant solutions can address global problems through community-based approaches.


### African Continental Strategy


Dr. Mactar Seck, Chief of Technology Digital Transformation, AI, and Cyber, noted that over 400 million people connected to social media across Africa face challenges from misinformation promoting violence, terrorism, and gender-based violence. He outlined efforts to develop continental AI strategy frameworks that incorporate considerations of misinformation and gender violence.


Seck mentioned development of a disinformation monitoring platform in collaboration with Gambia as a first step towards continental-level solutions that can be adapted and scaled across different African contexts.


### EU Regulatory Analysis


Elise Elena Mola, a lawyer specialising in EU AI Act implementation, provided critical analysis of current regulatory frameworks. She argued that existing EU AI Act requirements focus primarily on corporate efficiency rather than teaching society how to interact safely with AI systems.


Mola highlighted research showing that within 20 minutes of using TikTok, young men are exposed to extreme content, demonstrating how AI algorithms exploit psychological vulnerabilities to maximise engagement. She identified a disconnect between corporate AI governance focused on operational efficiency and societal AI literacy needs.


## International Organisation Responses


### UNDP Multi-Country Support


Yu Ping Chan, Head of Partnership at the United Nations Development Programme (UNDP), outlined international support mechanisms across multiple countries. UNDP supports over 10 countries with tools like eMonitor and iVerify designed to address hate speech and technology-facilitated gender-based violence.


Chan emphasised that global solutions developed in Western contexts may not be appropriate for all situations, particularly in developing countries where different cultural, economic, and technological contexts require adapted approaches.


### ITU Integration


Gitanjali Sah, representing the International Telecommunication Union (ITU) and involved in organising the WSIS Forum, addressed how workshop outcomes could be integrated into broader international policy processes. She emphasised translating discussion outcomes into actionable policies at international forums like the UN General Assembly.


Sah raised questions about biases in AI algorithms, particularly regarding gender discrimination and impacts on vulnerable populations including children and older persons, connecting digital threats to broader human rights concerns.


## Audience Questions and Discussion


### Educational Institution Roles


Dr. Eva Fell from the International Committee of the Red Cross emphasised schools as primary venues for teaching critical thinking skills about digital content and misinformation. The discussion highlighted peer-to-peer communication between older and younger students as particularly effective.


### Caregiver Training


Claudia Bustamante, Director of CRC (regulator in Colombia), raised the need to train caregivers beyond parents, including kindergarten staff and household helpers who spend significant time with children. This broader caregiver approach recognises that children’s digital safety depends on multiple adults who may lack digital literacy training.


### Implementation Mechanisms


Carol Constantine’s question about implementation led to discussion of multi-ministry collaboration involving Health, Education, ICT, and Justice departments, reflecting recognition that digital threats require coordinated government responses across traditional departmental boundaries.


## Proposed Solutions and Next Steps


### Technical Implementation


The workshop identified concrete technical initiatives including expansion of IEEE standards for age-appropriate design beyond the current 36 countries, and development of interactive terms and conditions that children can understand.


### Educational Initiatives


Speakers proposed comprehensive digital parenting education programmes and peer-to-peer communication initiatives in schools, addressing the reality that many parents lack digital literacy skills needed to guide their children effectively.


### Policy Integration


Gitanjali Sah proposed submitting workshop outcomes to the UN General Assembly through the chair’s report, providing a mechanism for integrating results into broader international policy processes.


## Closing Remarks


### Multi-Stakeholder Responsibility


Sami Galal concluded by identifying parents, governments, and private sector companies as different lines of defence, each with distinct moral responsibilities. He positioned corporate responsibility as an ethical imperative extending beyond regulatory compliance.


### International Cooperation


The workshop concluded with emphasis on the need for sustained international cooperation that balances coordinated global action with respect for national sovereignty and cultural differences. Speakers consistently emphasised that no single actor can address digital threats effectively, requiring coordinated responses across different sectors and levels of governance.


## Conclusion


The workshop successfully established consensus on the severity of digital threats to children whilst identifying practical solutions through multi-stakeholder collaboration. The combination of government policy innovation, technical standards development, youth research, and international cooperation mechanisms created a comprehensive framework for addressing what speakers characterised as an urgent public health crisis requiring immediate, coordinated action to protect children in digital environments.


Session transcript

Salma Abbasi: Good morning, ladies and gentlemen. Thank you very much for coming bright and early at nine o’clock. It’s always difficult to have a session first thing in the morning after a big party. My name is Professor Salma Abbasi, and I’m really delighted to be talking about misinformation and disinformation. This is an important topic that’s really close to everybody’s hearts, whether you’re old or young. This is information that the world is dealing with today. We have organized today’s workshop to be interactive, and we will be touching on this important issue in the context of its impact on children, their development, and we have brought together a very diverse range of speakers from different countries to give different views in different contexts. The geopolitical environment and the rapid adoption of technology and AI in our daily lives is causing quite a lot of issues. We at the eWorldwide Group are continuously researching children’s lived experiences and trying to understand how we can build resilience in them and between them, so that this world that they’re immersed in is well understood. In that context, I would like to share with you a report that we will be publishing today, which talks about the platforms’ effect on children’s holistic development. And there are three key areas that I’d like to discuss: physiological, psychological, and social development. Drawing on the latest global research, the risks that we see are immense. On physiological development: digital engagement is altering their biological and neurological development, and excessive screen time exposure and the dopamine-driven algorithms disrupt the brain’s development and thinking, reinforcing instant gratification, which causes problems. And the research links screen-time overuse to poor sleep.
These are things that we hear about, but the hidden developmental issues are not covered. The psychological impact, discussing with platforms themselves, is a big challenge. I’m sure you can understand we’re talking about trillions of dollars and how this impacts them. And in this context, the whole design of the exploitation of children’s psychological well-being to stimulate their vulnerabilities and exploit their vulnerabilities, and stimulate them to join various groups, is a very big risk for us in society today. We are dealing with the whole issue of self-validation of young people on how many likes they have, how many followers they have. So all the superficial behavior are driving them to do more and more dangerous things. And I believe that this is an area that is highly sensitive for young children as they grow through the adolescent time to see what they should be and how they should be behaving in society. They are most vulnerable to be manipulated and molded in certain directions. They seek their validation in the digital world. Last summer in the UK, we experienced a drastic polarization of society because of misinformation and disinformation. The digital platforms are shaping how children respond and react to society and social issues. The exposure to misogyny, the hate speech, and the toxic culture online that social media and gaming is encouraging is a big issue. It’s normalizing violence. It’s normalizing attitudes towards gender. In fact, I would argue we’re going backwards. It’s increasing the technology-facilitated gender-based violence, which my colleague will discuss further. But the images and the growing around these violent toxic images of abuse is resulting in kids having violent stabbing and shooting incidents increasing around the world as they blend their online world with the offline world of reality, which we are unable to see. I am calling this a public health crisis and a hidden public health crisis. 
And the synthesis of all of the cases that we have covered over 14 countries show an alarming significance of self-harm and suicide. And it’s 96%, which is a very big problem of those cases that exist are either killing somebody or killing yourself. And that is something that calls for action for all of us, which I’m hoping after the discussion, we will be able to get your support to join hands with us in this important work. So thank you so much for coming and I would like to now hand over to our Honorable Minister, Dr. Aminata Zerbo from Burkina Faso to give the opening remarks. Thank you, Minister.


Aminata Zerbo: Good morning, everyone. It is my great pleasure to be here to talk about this important topic. Thanks to Salma and eWorldwide for all the work they do to try to protect our children online. In the age of artificial intelligence, the ability to manipulate images, sounds, facts, and emotions through digital technologies is no longer science fiction. It presents a major challenge to our societies as it weakens truth, destabilizes institutions, fuels hate speech, and spreads falsified content such as deep fakes. In this context of geopolitical instability, the digital environment has become not only a strategic sphere of influence but also a vector of serious risk, especially for the most vulnerable. Children and young people, increasingly exposed to manipulative content, toxic online games, and deviant influencers, are seeing their perception of reality, their behavior, and their moral development deeply affected. We observe with concern the emergence of a digital culture marked by misogyny, sexual exploitation, technology-facilitated gender-based violence, but also fascination with criminality and radicalization of minds. The impact on mental health is just as troubling. This phenomenon seriously threatens the future of our children. In Burkina Faso, these challenges take on a particular significance. The security context in which we operate makes the fight against disinformation, hate speech, and algorithmic manipulation more essential. Fully aware of these challenges, our government, through my ministry, and in close collaboration with all national stakeholders, but also international stakeholders, is implementing structural action to regulate digital practices and create a healthy, inclusive, and secure digital environment. Among our flagship initiatives is the awareness raising of schoolchildren and high school students. 
On the legal front, Burkina Faso is working to develop regulations that frame the ethical use of artificial intelligence and digital technologies in respect of fundamental rights, transparency, accountability, and human dignity. Our goal is to promote human-centered AI, serving social cohesion, education, and peace, not as an instrument of manipulation or exploitation. Also, we are working to strengthen our cooperation with technical and international partners, such as eWorldwide, to establish effective mechanisms for detection and rapid response to disinformation campaigns and toxic content. Distinguished participants, we are convinced that responses to these threats must be collective, united, and adapted to our respective contexts. This is why Burkina Faso reaffirms here its commitment to actively contribute to the creation of a trusted digital space, one that protects youth, guarantees human rights, and fosters peaceful and sustainable digital development. In this global effort, the strengthening of national capacities, regional cooperation, and the harmonization of ethical standards will be our common tools to unmask falsehoods, uphold truth, and build a safer digital future for generations to come. I would like to thank you.


Salma Abbasi: Thank you very much. I hope you can see the gender balance or imbalance in this room. I’d like to now ask the Director-General from Indonesia to kindly give her opening remarks.


Fifi Aleyda Yahya: Good morning, ladies and gentlemen. Professor Salma, thank you for the introduction. Distinguished panelists, ladies and gentlemen, my name is Fifi and I represent the Ministry of Communication and Digital of the Republic of Indonesia. I’m very happy to be here, delighted to meet you all. I would like to congratulate the ITU and the Session Organizing Committee on including AI-driven forgery issues in our collective dialogue today. It is an honor to join this much anticipated event. I know I’m supposed to sit, but I think it’s easier to speak and I think my voice is better if I stand. So I’m sorry if I stand. And in the midst of a remarkable advancement in artificial intelligence, we face an equally pressing challenge, the spread of disinformation, which is amplified and sometimes generated by AI. In Indonesia, the world’s third largest democracy, well, we can do the math, but it’s 280 million or close to 300 million population, Minister, at this moment. Manipulated content such as deep fakes, bots, and synthetic text has grown in sophistication, posing serious threats to public trust and civic discourse. To address this, the government of Indonesia has adopted a comprehensive and collaborative approach. First, the regulation, of course. We are strengthening our digital governance framework by enacting the government regulation. This regulation reflects Indonesia’s strong concern for protecting children’s rights, safety, and well-being in the digital space. So, for example, we are delaying the age for our teenagers to be able to access social media. But we’re not banning them from using the gadget, just delaying them to 17 to 18 years old. So at that age, after that age, they can access social media independently without parents’ supervision. So that’s one. In addition, we issued a letter, I should say, on ethics of artificial intelligence, which established core principles for responsible, transparent, and human-centered AI development. 
And we realized that when we talk about digital AI, well, it’s faster there. I think not long after this, they will be more sophisticated. Now, we can still spot whether this is an AI, but I’m sure in no time, it’s going to be more sophisticated and more seamless, I should say. So, third, through partnership, of course, that’s why we’re here. We collaborate with digital platforms, civil society, and international partners to detect, mitigate, and combat AI-driven disinformation. So, again, our moderator, Professor Salma, we’re very happy to be here. And Indonesia, through the Ministry of Communication and Digital, we are ready to collaborate. We believe AI must be human-centric, resting on a commitment to its use in the service of humanity, we all believe that, I’m sure, and the common good. AI for good, not to become a tool of deception. Therefore, we support the development of global standards that promote transparency, accountability, and ethics in AI development. Indonesia, again, is committed to being part of the global solution to break the chain of AI-powered disinformation. Ladies and gentlemen, thank you.


Salma Abbasi: Thank you very much. I think that this is a great opening for us to now have a conversation. At the end of the session, I will definitely share the reports on the hidden public health crisis that talk about these topics that Indonesia is very aware of, taking these proactive steps to safeguard the young generation. I think, collectively, in the Global South, we have fantastic initiatives that have been spearheaded by countries, and we need to understand that they aren’t taking away our civil rights, but they are protecting the most vulnerable that need to be protected. So, that being said, I’m going to – actually, if we could move to the picture of the panellists – I’m going to introduce our panellists very briefly. Sami, and I’m just going to say, my youngest participant with a very intellectually stimulating presentation, and my brother, Bazlur, from Bangladesh, who is going to talk about technology-facilitated gender-based violence and many other things as a historical champion of technology for good. And then, my colleague, Elise, who is a fantastic lawyer, who’s worked really closely with us to develop the guidelines for new standards that kids will read for terms and conditions, and I’m hoping that we can have more than our 36 countries join hands with us. And my brother, Mactar, Dr. Mactar from Africa, he’s the Chief of Technology Digital Transformation, AI, and Cyber, and has a lot of work leading across the African continent. And lastly, my colleague, who’s hiding over there, please join the panellists here. I just saw you. We have our colleague from UNDP, Head of Partnership, and then my colleague Karen from the IEEE who’s going to be talking about standards, collaboration, and age-appropriate digital content. That being said, we will move to our first speaker and the question I’m going to ask, so we’re doing this in three minutes per speaker. So, Sami, for a school assignment you recently researched the impact of screen-time exposure on the brain. 
What did you find out and what behavioral impact are you seeing in young children around you? Thank you.


Sami Galal: So my research was on how screens impact children aged 0 to 5 and what impact they have on the child’s brain. After that, I became more interested in the topic and I researched younger ages further. So I found out that these screens impact these children really negatively and they do it in various parts of the brain. So starting off with the prefrontal cortex, it’s one of the last places of the brain to mature, so the early years are really important in the child’s overall brain development. It’s also known as the personality center as it is involved with everything related to your personality. So disruption in the area will lead to trouble regulating emotional behavior and trouble being in social scenarios. They don’t know how to interact with other people. Then we are moving on to the hippocampus. The hippocampus has the task of learning and memory. So it’s what we use mostly in school work and when we have to put in some intellectual work. Disruption in the area will lead to trouble processing information and trouble concentrating on some tasks. Then moving on to the occipital lobe, where we can find the visual cortex. The visual cortex is the most important part of the brain for processing visual information. So disruption in the area will lead to myopia, a condition where things close by are not blurry, but the further away they are, the blurrier they get. Additionally, dopamine is also a crucial factor. It’s a neurotransmitter and a pleasure hormone. It’s crucial in how children react towards rewards, form connections, and learn. The dopamine system has various pathways that lead to various parts of the brain, which can lead to an addiction, as screens stimulate the production of dopamine, the same that you get when you do a physical activity. 
When you’re on screens, you don’t have to put the same effort as when you’re doing physical activities, which just makes the reward that more enjoyable, as you don’t have to put in the work and you get the same effect. Thank you very much.


Salma Abbasi: Thank you very much. So you can see that there's some deep science going on here, and we must understand all of these things invisibly happening with our children. And there's a recent study that also talks about screen time among little children and how the screen has become the digital babysitter. But for a young person to have that view and share it, I'm super proud of you. Thank you. My brother, the next question is for you. Could you please provide some insights on your Youth Resilience to Misinformation initiative in Bangladesh and how you're building local engagement and development at the grassroots? And could you also share some insights from your experience and knowledge on technology-facilitated gender-based violence in Bangladesh and what you're doing to address it?


AHM Bazlur Rahman: Thank you, Madam Moderator. Excellencies, colleagues, and distinguished participants, it's a great privilege to be with all of you this morning. I would like to sincerely thank Professor Dr. Salma Abbasi for the kind invitation, which has allowed me to share my thoughts on behalf of the Bangladesh NGOs Network for Radio and Communication about misinformation and disinformation, as well as combating technology-facilitated gender-based violence. As we all know, information integrity refers to the accuracy, consistency, and reliability of information. Misinformation and disinformation pollute the entire information ecosystem and threaten human progress. That's why we are here. Propaganda, misinformation, and fake news have the potential to polarize public opinion, to promote violent extremism and hate speech, and ultimately to undermine democracies and reduce trust in the democratic process. To address the issue, we have been implementing a project called Youth Resilience to Misinformation: Building Local Engagement and Media Development in Bangladesh. Under the project, we basically divided our activities into two parts: one at the national level, and another at the grassroots level. From the local level, I would like to share some activities. We are used to using social media, but with some orientation. That's why we have chosen Facebook page development for working with rural youth through the hyperlocal method. Social media, Facebook in particular, is used as a platform for countering mis- and disinformation, and we identify the youth and youth communities at the grassroots level. We call this hyperlocal. What is hyperlocal? Hyperlocal means enabling, in line with the supply side and the demand side – youth and young women are of course part of this – and also engaging youth and young women, and, lastly, empowering youth and young women. And what are our major interventions?
Identifying social media stakeholders, conducting stakeholder analysis, and designing a monitoring framework for the programme; engaging social media stakeholders; organizing orientations to strengthen critical analysis skills to counter the proliferation of misinformation; collecting significant cases from time to time; and developing audiovisual content and messages for dissemination through social media. These are the major interventions for combating misinformation and disinformation through the youth community. Thank you.


Salma Abbasi: Thank you very much indeed. I think we’ll have more to hear. Thank you. I’d like to go to our next speaker, who is Elise Mola. Elise, where do you see the biggest disconnect between the global policy discussion and what actually is happening inside companies?


Elise Elena Mola: A challenging question – you've covered a lot of different things. Well, on a daily basis, I help large companies, mostly in Germany and across Europe, to implement the new EU AI Act and guide them in integrating AI systems into their companies according to proper governance structures. I think the challenge here is, yes, there are requirements, for example, for AI literacy. But what those requirements cover is, you know, how do I use this AI tool to effectively automate certain tasks at corporations to increase efficiency? The kind of AI skills, though, that we really need as a society are the skills to interact with AI while understanding how, for example, through social media, it's actually skewing our perception of reality. There have been some interesting studies showing, for example, that within 20 minutes of using TikTok, young men are shown extreme right-wing propaganda, violence, and misogynist content. And what we're missing is an understanding of how the AI algorithm is manipulating us and playing on our most vulnerable evolutionary traits in order to maximize our screen time and manipulate us into staying on the platform, creating a kind of, I would say, outrage machine. Because when we're angry and we're hateful, we stay on the platform, and we're in this kind of amygdala-driven state.


Salma Abbasi: Our next speaker is Ms. Yu Ping Chan. UNDP has been advocating for an inclusive, responsible digital transformation, and in that context there is a growing challenge posed by the misuse of digital technologies. How is UNDP supporting countries, especially in the Global South, to strengthen their digital resilience and information integrity? Thank you.


Yu Ping Chan: Thank you so much, Professor; it's an honour to be here. My name is Yu Ping Chan, and I lead digital engagements and partnerships at the UNDP. We are the development arm of the United Nations, present in 170 countries and territories around the world, and our primary role is to support countries through all phases of development. In many cases we are actually the front face of the UN in a country, serving as the right hand of government. So this question of how we strengthen digital resilience is particularly profound, because we recognize the ability of digital technologies and AI to accelerate the achievement of sustainable development; but at the same time, the risks and challenges that other panellists have spoken about, the topic of today's conversation, are something we need to build capacity against in our programme countries, particularly in the Global South. When it comes to all these challenges, think about the fact that these are challenges even for developed countries, so they are all the more profound for developing countries that lack the capacity, and often also the manpower, the resources, and the government institutions, to address them. So we need to start by meeting them where the urgency of the need is greatest and build capacity in these countries – in their policymakers, their communities, their local ecosystems – so they have local solutions to these problems as well. It's also not a one-size-fits-all, where we import a global solution developed in the West. We need to think about locally relevant, culturally sensitive, contextual solutions to address all of these issues effectively. And that's where the UNDP comes in: to really think about these aspects as part of a comprehensive approach to digital technologies, and to build the type of digital resilience that we need.
In the area of information integrity, for instance, we work in over 10 countries with eMonitor, which addresses hate speech and technology-facilitated gender-based violence. We have been working in over 10 different countries with iVerify on enhanced fact-checking. We are building a digital kit for democracy that includes these types of technologies, safe and secure, to really support our developing-country partners in addressing these challenges. We welcome working with more stakeholder partners to make sure, as I've said before, that these solutions are appropriate to national contexts, take into account the needs and particularities of our developing-country partners, and work to empower the Global South as co-creators of the digital future we want, so that we are all part of the digital future and the potential of digital and AI, while addressing these types of challenges as well.


Salma Abbasi: Thank you very much. So now I move to Dr. Mactar. Dr. Mactar, the question is: what policies, strategies, and governance frameworks are African countries implementing to address the risks of AI-generated misinformation while protecting freedom of expression and maintaining electoral integrity? Thank you.


Mactar Seck: Thank you very much, Dr. Salma. I think it is a very important question when we look at the African view. On the continent, you have more than 400 million people connected to social media, and this is a big challenge. Why? Because this misinformation and disinformation drive a lot of problems on the continent when we look at the issues of violence, freedom of speech, terrorism, and gender violence. All are coming from this disinformation and misinformation, as are the issues of democracy in several countries when you have elections. At UNECA, we try to look at this from two angles: the policy side and the technical side. On the policy side, we have to look at the continental level, because misinformation and disinformation are not a national issue. Information can come from everywhere; you have several people in the diaspora, and terrorism is everywhere. We can look at what has happened in the Sahel region – Mali, Burkina Faso, and Niger – look at what the problem is there, where information comes from, and what kind of information is sent to the population. And we need to look first at policy. At the continental level, we already work with the African Union: we developed the AI strategy framework for Africa, the continental cybersecurity policy, and the cybersecurity guidelines for member states, and in all these frameworks we incorporate the issues of disinformation, misinformation, and gender violence. That is one step, at the continental level. At the national level, we support African countries to develop their national policies, taking into consideration the ethical impact of the information society. Now, we can say, between 10 and 14 countries already have a national AI policy incorporating the issue of misinformation and disinformation. We also have a programme on capacity building. And, most important, there is awareness.
It is something we need to do more of across all members of society, because everybody now uses the information society. You receive information, and you don't know whether it's true or not. I'm sure Madam Minister is facing this: every day you receive some information, and you don't know where it is coming from. And you can't go every time to the news, to the media, to say, no, it's not coming from me. We also developed a platform with the Gambia to monitor all this disinformation and fake news. I think it is a first step we are taking, and we will see how we can expand it at the continental level. Also, we are engaged and very committed, under the G20 data governance working group, to seeing how we can assist African countries to fight this misinformation and disinformation. We have also developed applications to monitor hate speech, as we did in Kenya and other countries. These are some key areas where we are focused, but it is not easy. It is not easy to fight against this disinformation and misinformation. We need to do more awareness-raising among the population. Thank you.


Salma Abbasi: Thank you very much indeed. So the final panelist is going to be Karen from the IEEE. What are your views on how to address the impact of AI-driven misinformation and digital content that is harmful to young people? Thank you.


IEEE representative: All right, now it's on. Well, I'd like to first apologize that Karen McCabe can't make it this morning; she had an urgent matter come up. I am Karen Mulberry, so you still get a Karen from IEEE. And let me tell you a little bit about IEEE itself. It is the world's largest technical professional organization. We have technical communities in 190 countries and over 500,000 members working on a lot of solutions, working with Indonesia on some of their issues, and we actually work with Salma and her organization on how to address some critical aspects of misinformation. One of the areas where we have a body of work is children – but not only children. It's what's age-appropriate, because what might be appropriate for a three-year-old might not be the same for a five-year-old or a 10-year-old. And even when you look at the vulnerable ageing population: what is appropriate access to information for them, and do they understand the impact of what's out there? So how can we, as a technical standards body, set up a process so that when you develop a product, you consider what's age-appropriate? We've worked with 5Rights, we've worked with eWorldwide, and that kind of started our journey on our first standard, on age-appropriate design. So if you're looking at a product, how would you design it to make sure that it's appropriate for the age group that is your target? As we have heard, it's very important to make sure that people get the right information, and that it's trustworthy and responsible, whether for the three-year-old, the five-year-old, the 10-year-old, or my 95-year-old father, who believes everything on the internet is true. The next standard, which was just released earlier this year, covers how you verify someone's age to make sure that they only have access to what's appropriate for them.
And a lot of countries are looking at that as a possible solution and companion to the law. I know we've worked with Indonesia on our age-appropriate design work and our age verification work, so that they can approach the framework around policy and regulation to make sure that children only have access to the things that are good for them, and that they don't get exposed to things they shouldn't be exposed to until they're old enough to understand what they are. Now, following this progression, and as Dr. Abbasi noted, we are also working on a standard on e-gaming: what's appropriate in terms of e-gaming, which, as we've heard, creates a lot of addiction and attraction, almost a spiral effect. So now we're trying to figure out the standard approach to which a product should be built, considering all of these factors, so that you avoid the misinformation and somehow minimize the addiction and other impacts. So thank you very much, and if you would like to join us in the work, please do – we would love to have your expertise.


Salma Abbasi: Thank you very much. So now we move to the next part of the session. We started a little bit late, so we will just go a little bit longer. To the point that Karen mentioned, I have a document here which is a draft guideline for children's safe engagement on social media and gaming platforms, and it covers the terms and conditions. I don't know how many people here actually read the terms and conditions, or just click accept. Does anybody actually read the terms and conditions? Oh, wow. Good, mashallah, there are a few. But I want to tell you that we did a survey to understand children's lived experiences online and what their exposure is to today's social media platforms. And I can tell you that 75% of them say that they don't read the terms and conditions – that they're too long. And children like my young panellist here, Sami, have then created a guideline. It's written by children for children, so that social media companies and gaming platforms can actually use it. So I think it would be a very good time to ask a few of you some questions on what you've heard, and whether there's something you'd like to share. I know two people in this room whom I met a couple of days ago. I'm going to go straight to you. If I can pass this mic, my dear – from a Colombian context.


Audience: Thank you. Good morning, everyone. I'm Claudia Bustamante, the director of the CRC, the regulator in Colombia. It's a great panel you have here, and I've heard many, many great things. This is a challenge for all our countries and our people. In Colombia, we have approached it in different ways, but we have many challenges to cope with. We have training for critical thinking: an open course with gamification and very simple language, so people understand that they need to think about everything they see on their multiple screens. You first need that kind of critical reasoning to figure out whether something is true or not. We also did fieldwork to gather data through surveys, and we talked with the caretakers. Children are not with their parents all the time; maybe they are with another person in their house, or in kindergarten, or places like that. So we need to train those people too. It's not only the responsibility of the parents or the teachers; these people are very important in the process as well. On the technical issues and the terms and conditions, we have a limitation, because our powers as a regulator don't reach platforms, only traditional service providers. We have talked to the platforms and suggested codes of conduct and some guidelines, but it's up to them to adopt them. We can't set rules for them, and sometimes this limits what we can do.


Salma Abbasi: Does anybody else have any questions?


Audience: Hello, I’m Dr. Eva Fell, I work for the International Committee of the Red Cross. Obviously, we’re really concerned by misinformation and fake news, but actually my question is more about how much is this going into schools? Isn’t that where we need to be talking to children and giving them critical thinking? I was just wondering if you have…


Salma Abbasi: No, that's a lovely question. We actually started in schools. We're working actively in schools in the UK – actually in seven countries – listening to children and asking them that, so we're continuously researching their lived experiences. We've been working in Malaysia, and we'd love to work in Indonesia, in the UAE, and, inshallah, soon in Burkina Faso. It all begins with the children. I think you came in late; I was talking about the brain, brain development, and the impact, and there's a paper that's going to be online today. I'm just launching it; it's here. Anybody who wants it, please, I'll post it on my LinkedIn today. It's called The Hidden Public Health Crisis: the influence of social media and gaming, and its physiological, psychological, and social impact on children's health. I'm calling it a public health crisis. This needs the Ministry of Health, the Ministry of Education, the Ministry of ICT, and the Ministry of Justice to work together. A collaborative effort is needed to change the education curriculum, and parenting too. In the UK, with my partner, Sarah Pinnock, we've launched a programme on digital parenting. Parents think the kids are safe when they're in their room. They don't understand – why do you need your telephone in your bedroom? That's the first thing. So there are little tiny tips for the parents and for the children, and we start in primary school. We're trying to teach parents that this is not a digital babysitter; it's a dangerous tool. The iPads you see as you go into shopping centres, where three-year-olds are getting autism from being bombarded with this. There's a lot of fantastic research talking about this. The thing is that we need to take action. So we will take one more question, if there is one. Ah – because I want you to do the closing. Not right now, sorry; you're doing the closing, not opening it.


Audience: Hi, I also have a question. So, yes, so my name is Carol Constantine and I have a human resources technology company. But my question is, I just saw on the news, Denmark, I don’t know if you are aware, is passing the first comprehensive legislation giving copyrights to each one of us, of our voice, face, and body. So I wanted to share this and ask you, do you think this is the way forward? And what hurdles do you see there?


Salma Abbasi: Very good. Thank you so much for coming and for sharing. I did not know that; I'm delighted to hear it. What we've got here… Salma, I have a question. Can I just answer this one? Where is it? Oh, yeah. Oh, Bhavan, excellent. I wanted him to be a panellist. So, what we have is that we've actually surveyed children and asked them what they want with social media and their data. They not only want to own their data; they want the right to destroy their data, and they want to know if their data is being sold. Now, funnily enough – it may not be funny – the only country in the world that hasn't signed the UN Convention on the Rights of the Child is the United States of America. And that is striking, because even North Korea and South Sudan – every other country – has signed it. So we need countries like Indonesia and Australia to be forward thinkers in this space, like what Denmark is doing. And we need collective action. Like you were saying, you can only ask: can you do this? Regulators have the power to switch off. I know that Indonesia, you have taken the control to switch things off. Brazil – there was a lady here from Brazil – Brazil switched off Twitter, or X. We have the power; what we don't do is exercise it. If we do it collectively – and we're dealing with, you know, $2 trillion on social media platforms, and $700 million, I think it's billion, dollars in cosmetic sales – you have to go and see what is happening to girls, their perception of nature, beauty, et cetera, and the boys' perception from the games that are toxic and violent. Even though we have a new standard coming out on gaming, that toxic behavior, that machismo, is penetrating every society, whether it's Austria, Germany, London, or Dubai; it's all over the place. So we do need a collective call for action. And if I may, because I see the time – we're five minutes over, I'm very aware – we're going to move to two things.
As I mentioned, we have a draft. It's a draft, by 36 countries so far. I met the regulator from Nigeria and he said, Salma, why isn't our name on it? So we would love you to join us, with the IEEE Working Group, to take a look at this. It's a draft, and if you know me, I'm a revision person – it goes up to Rev 21 – because it's only going to get better with your ideas and your eyes on the document. The second thing is that I would like you to read what the children think about terms and conditions and sit down with your children and discuss it, because you pay the telephone bill. I had this very gentle, nice conversation. Children don't respond well when you tell them not to do something; it's got to be reasoning and logic. And I think having young people like Sami make videos on how to be safe online – peer-to-peer communication is working with us in schools, secondary schools with primaries. You need that trusted young adult to be the coach. And then the final thing, which is very important for me, is to spread the word that not everything online is true. I'm a professor of ethical AI. Please believe me, rubbish is propagating rubbish, and misinformation is crowding out our human values. If it's not digitized, it doesn't exist – it means that when I have a cold and I take ginger and lemon, because my mother said so, unless it's online, we will never use it. So those hidden recipes and remedies are really at risk. As you can see, we have a very, very important task. And what I would like to do is ask two wonderful people to give the closing remarks, just two minutes each. One is Sami, and then my dear younger sister Gitanjali from the ITU, who is organizing this wonderful WSIS Forum, to please say the closing remarks. So, sir.


Sami Galal: So, to solve this major problem, which is becoming more and more prevalent in modern society, I have a few recommendations. Starting with parents, whom I consider the first line of defense: parents should know not to hand the phone to the child. As Ms. Salma said, it's like a digital babysitter. So even when the parent has to go and do an activity and it's easier to give the phone to the child, they should only do that as a last resort, after trying some more interactive activities to keep the child occupied while they do their things. Then, moving on to governments, which I consider the second line of defense: governments should raise awareness among parents and children. They should raise awareness among parents by telling them what the risks of their child being online are and what they can do to prevent them. We know that parents don't have that much time in their day-to-day lives, so it could be done in a journal type of way, or on a radio show they could listen to on the way to work – non-time-consuming, but keeping them informed. And for the children, once they do find themselves online, the terms and conditions should be interactive, in a way that lets the child actually understand them, how much time they should stay online, and the risks they might encounter while being online. Additionally, the private sector has a moral responsibility: the gaming companies and the social media companies have to act morally and inform these young children of the dangers they could face online. So although this is a tough task, with the help of everyone we can make screens a positive experience, with no negative outcomes for these young children.


Salma Abbasi: Absolutely wonderful. I think this is really lovely and a holistic way of looking at the situation and of how we move forward. And I think establishing and enforcing a duty of care for the regulators, the families and all stakeholders, supporting independent interdisciplinary research, so we can really understand what is going on and how to address it best in schools and in our homes. Investing in global digital literacy and public awareness campaigns is a must in every language we can think of. And ensuring multi-stakeholder engagement, and most importantly, youth participation, not just to sit here, but to actively write and edit the reports that we’re producing, which is so important. So over to you, Gitanjali.


Gitanjali Sah: Thank you, Dr. Abbasi, and apologies that the Secretary-General could not join you; she's running around all over AI and WSIS. This is really important work, and I think you also touched on educators: it's really important that teachers are guiding our children as well. I am also one of those parents who is guilty as charged of using the mobile as a babysitter. It is a fact that parents and educators both matter; in my child's school, they really have classes, lessons which tell them about what is wrong on the internet, the dark net, and the awareness that we spoke about. There was this series on Netflix, Adolescence, and it created so much awareness among so many of us – it was a real shocker that this is happening. I also wanted to talk about the biases in AI algorithms. I don't know if you covered that, but that is another very important part, especially for gender discrimination, and of course now for children and older persons. And the engineers really have to be aware – he said it's a moral responsibility – the private sector engineers have to be involved in it. It is really the moral responsibility of the global community. So the United Nations and the ITU are committed to continuing to provide you with these kinds of platforms to get together, to ensure that these dialogues are happening – but that they don't just remain dialogues; they are calls for action. For example, the outcomes of this high-level event are going into the United Nations General Assembly, the UNGA overall review, where we can actually make a difference. So really, Dr. Abbasi, if you have a call for action coming out of this meeting, it will go into the chair's report and we will put it into the UNGA overall review. Thank you so much.


Salma Abbasi: Thank you. Thank you so much. I would like to thank my panellists before they all run away; it was really wonderful, all the different contexts that you've shared. And I'd like to have one picture with you all. Thank you.


S

Sami Galal

Speech speed

171 words per minute

Speech length

682 words

Speech time

238 seconds

Screen time negatively affects multiple brain regions including prefrontal cortex, hippocampus, and visual cortex, leading to emotional regulation problems, learning difficulties, and vision issues

Explanation

Sami’s research on children aged 0-5 found that screens impact various parts of the brain negatively. The prefrontal cortex disruption leads to trouble regulating emotional behavior and social interactions, hippocampus disruption causes problems with learning and memory processing, and visual cortex disruption leads to myopia.


Evidence

Research focused on children aged 0-5; prefrontal cortex is one of the last brain areas to mature and serves as the personality center; hippocampus handles learning and memory tasks used in schoolwork; visual cortex processes visual information and disruption causes myopia (near-sightedness)


Major discussion point

Impact of Digital Technologies on Children’s Development


Topics

Child safety online | Children rights | Online education


Agreed with

– Salma Abbasi
– Elise Elena Mola

Agreed on

Digital platforms exploit children’s vulnerabilities and cause developmental harm


Terms and conditions should be interactive and understandable for children to comprehend online risks and time limits

Explanation

As part of his recommendations for addressing digital risks, Sami argues that when children find themselves online, the terms and conditions should be presented in an interactive way that helps them understand both the time they should spend online and the potential risks they might encounter.


Evidence

Part of a broader recommendation system including parental responsibility and government awareness campaigns


Major discussion point

Technical Standards and Solutions


Topics

Child safety online | Children rights | Content policy


Parents, governments, and private sector companies all have moral responsibilities as different lines of defense

Explanation

Sami proposes a multi-layered defense system where parents serve as the first line of defense by avoiding using phones as digital babysitters, governments act as the second line by raising awareness, and private sector companies have moral responsibilities to inform children about online dangers.


Evidence

Parents should try interactive activities before resorting to phones; governments can use time-efficient methods like radio shows; gaming and social media companies must be morally correct in informing children


Major discussion point

Call for Collective Action


Topics

Child safety online | Children rights | Consumer protection


Agreed with

– Salma Abbasi
– Aminata Zerbo
– Gitanjali Sah

Agreed on

Multi-stakeholder responsibility and collective action needed


S

Salma Abbasi

Speech speed

150 words per minute

Speech length

2865 words

Speech time

1139 seconds

Digital platforms exploit children’s psychological vulnerabilities through dopamine-driven algorithms that disrupt brain development and promote instant gratification

Explanation

Salma argues that digital platforms are designed to exploit children’s psychological well-being by stimulating their vulnerabilities through algorithms that trigger dopamine responses. This disrupts normal brain development and creates patterns of instant gratification that cause developmental problems.


Evidence

Research on dopamine-driven algorithms; links to screen time overuse and poor sleep; hidden developmental issues not widely covered; design exploitation of psychological vulnerabilities


Major discussion point

Impact of Digital Technologies on Children’s Development


Topics

Child safety online | Children rights | Content policy


Agreed with

– Sami Galal
– Elise Elena Mola

Agreed on

Digital platforms exploit children’s vulnerabilities and cause developmental harm


Technology-facilitated gender-based violence and exposure to toxic content is normalizing violence and creating harmful attitudes toward gender

Explanation

Salma contends that exposure to misogyny, hate speech, and toxic culture online through social media and gaming is normalizing violence and creating regressive attitudes toward gender. This is contributing to increased technology-facilitated gender-based violence and violent incidents among children.


Evidence

UK experienced drastic polarization due to misinformation; exposure to misogynistic content and hate speech; violent stabbing and shooting incidents increasing globally as children blend online and offline worlds


Major discussion point

Impact of Digital Technologies on Children’s Development


Topics

Child safety online | Gender rights online | Content policy


Across 14 countries, an alarming 96% of cases studied involved either harming others or self-harm and suicide

Explanation

Based on research across 14 countries, Salma presents alarming statistics showing that 96% of cases studied involve either children killing someone else or engaging in self-harm/suicide. She characterizes this as a hidden public health crisis requiring immediate action.


Evidence

Synthesis of cases across 14 countries; 96% statistic of cases involving killing others or self-harm; calls it a public health crisis


Major discussion point

Impact of Digital Technologies on Children’s Development


Topics

Child safety online | Children rights | Human rights principles


Children want ownership of their data, the right to destroy it, and transparency about data sales

Explanation

Through surveys of children’s lived experiences, Salma found that children not only want to own their data but also want the right to destroy their data and to know when their data is being sold. This represents children’s desire for greater control over their digital footprint.


Evidence

Survey of children’s lived experiences online; children’s specific requests regarding data ownership, destruction rights, and transparency about data sales


Major discussion point

Technical Standards and Solutions


Topics

Children rights | Privacy and data protection | Consumer protection


Multi-stakeholder engagement including youth participation in writing and editing reports is crucial for effective solutions

Explanation

Salma emphasizes that addressing digital threats requires not just multi-stakeholder engagement but active youth participation where young people don’t just attend meetings but actively contribute to writing and editing the reports and solutions being developed.


Evidence

Draft guidelines written by children for children; 36 countries participating in collaborative efforts; youth-led content creation for terms and conditions


Major discussion point

Call for Collective Action


Topics

Children rights | Interdisciplinary approaches | Human rights principles


Agreed with

– Sami Galal
– Aminata Zerbo
– Gitanjali Sah

Agreed on

Multi-stakeholder responsibility and collective action needed



Aminata Zerbo

Speech speed

86 words per minute

Speech length

451 words

Speech time

311 seconds

AI-generated content including deepfakes poses serious threats to public trust and democratic processes, especially affecting vulnerable populations

Explanation

Minister Zerbo argues that the ability to manipulate images, sounds, facts, and emotions through AI technologies like deepfakes is no longer science fiction but presents major challenges to society. These technologies weaken truth, destabilize institutions, and spread falsified content, particularly affecting vulnerable populations like children.


Evidence

Context of geopolitical instability; digital environment as strategic sphere of influence; children exposed to manipulative content, toxic games, and deviant influencers; emergence of digital culture marked by misogyny and sexual exploitation


Major discussion point

Misinformation and Disinformation Challenges


Topics

Child safety online | Content policy | Human rights principles


Burkina Faso is implementing structural actions including awareness campaigns and developing AI ethics regulations while strengthening international cooperation

Explanation

In response to digital challenges, Burkina Faso’s government is taking comprehensive action including awareness-raising campaigns for students, developing legal regulations for ethical AI use, and strengthening cooperation with international partners. The goal is to create human-centered AI that serves social cohesion rather than manipulation.


Evidence

Awareness campaigns for school and high school students; legal framework development for ethical AI use respecting fundamental rights, transparency, and accountability; cooperation with international partners for detection and response mechanisms


Major discussion point

Regulatory and Policy Responses


Topics

Legal and regulatory | Human rights principles | Capacity development


Responses to digital threats must be collective, united, and adapted to respective national contexts rather than one-size-fits-all solutions

Explanation

Minister Zerbo emphasizes that addressing digital threats requires collective action that is adapted to different national contexts rather than importing global solutions developed elsewhere. She advocates for strengthening national capacities, regional cooperation, and harmonizing ethical standards as common tools.


Evidence

Burkina Faso’s commitment to contribute to trusted digital space; emphasis on protecting youth and guaranteeing human rights; focus on strengthening national capacities and regional cooperation


Major discussion point

Call for Collective Action


Topics

Human rights principles | Capacity development | Interdisciplinary approaches


Agreed with

– Yu Ping Chan
– Mactar Seck
– Audience

Agreed on

Need for locally adapted, contextually appropriate solutions



Fifi Aleyda Yahya

Speech speed

116 words per minute

Speech length

503 words

Speech time

259 seconds

Indonesia has adopted comprehensive approaches, including delaying independent social media access until ages 17-18 and establishing AI ethics principles

Explanation

Indonesia has implemented a multi-faceted approach to address AI-driven misinformation, including regulatory measures that delay independent social media access for teenagers until 17-18 years old (requiring parental supervision before that age) and establishing ethical AI principles for responsible, transparent, and human-centered AI development.


Evidence

Government regulation protecting children’s rights and safety; delaying social media access to 17-18 years old while allowing gadget use; AI ethics guidelines establishing core principles for responsible AI development; collaboration with digital platforms and international partners


Major discussion point

Regulatory and Policy Responses


Topics

Child safety online | Children rights | Legal and regulatory


Disagreed with

– Elise Elena Mola

Disagreed on

Regulatory approach – age restrictions vs. comprehensive frameworks



AHM Bazlur Rahman

Speech speed

101 words per minute

Speech length

377 words

Speech time

223 seconds

Misinformation and disinformation pollute the entire information ecosystem and threaten human progress by promoting violent extremism and undermining democracies

Explanation

Rahman argues that misinformation and disinformation fundamentally compromise information integrity by polluting the entire information ecosystem. This pollution has serious consequences including promoting violent extremism, hate speech, polarizing public opinion, and ultimately undermining democratic processes and reducing trust in democratic institutions.


Evidence

Information integrity defined as accuracy, consistency, and reliability; propaganda and fake news potential to polarize opinion and promote violent extremism; impact on democratic processes and public trust


Major discussion point

Misinformation and Disinformation Challenges


Topics

Content policy | Human rights principles | Freedom of expression


Youth resilience programs in Bangladesh use hyperlocal Facebook page development and social media training to counter misinformation at grassroots level

Explanation

Bangladesh has implemented a comprehensive youth resilience program that operates at both national and grassroots levels. The grassroots approach uses hyperlocal methods, including Facebook page development for rural youth and social media training to counter misinformation through community engagement.


Evidence

Youth Resilience to Misinformation project with national and grassroots components; hyperlocal method using Facebook pages; identification of youth communities; social media stakeholder analysis; orientation programs for critical analysis skills; audiovisual content development


Major discussion point

Grassroots and Community Initiatives


Topics

Capacity development | Digital access | Online education



Elise Elena Mola

Speech speed

106 words per minute

Speech length

245 words

Speech time

138 seconds

Current EU AI Act requirements focus on corporate efficiency rather than teaching society how to interact safely with AI systems

Explanation

Mola identifies a significant disconnect between policy requirements and societal needs. While the EU AI Act includes AI literacy requirements, these focus on helping corporations use AI tools for efficiency and automation rather than teaching people the critical AI skills needed to understand how algorithms manipulate perception and behavior on social media platforms.


Evidence

Experience helping companies implement EU AI Act; AI literacy requirements focused on corporate efficiency; lack of skills to understand algorithmic manipulation on social media


Major discussion point

Regulatory and Policy Responses


Topics

Legal and regulatory | Online education | Human rights principles


Disagreed with

– Fifi Aleyda Yahya

Disagreed on

Regulatory approach – age restrictions vs. comprehensive frameworks


AI algorithms on platforms like TikTok can expose young men to extreme right-wing propaganda and misogynistic content within 20 minutes

Explanation

Mola cites research showing how quickly AI algorithms can steer users toward extreme content. The algorithms are designed to maximize screen time by exploiting users&#8217; most primal evolutionary vulnerabilities, creating an &#8216;outrage machine&#8217; that keeps people engaged through anger and hate.


Evidence

Studies showing exposure to extreme right-wing propaganda, violence, and misogynist content within 20 minutes on TikTok; algorithms designed to maximize screen time; creation of ‘outrage machine’ through emotional manipulation


Major discussion point

Misinformation and Disinformation Challenges


Topics

Child safety online | Content policy | Gender rights online


Agreed with

– Sami Galal
– Salma Abbasi

Agreed on

Digital platforms exploit children’s vulnerabilities and cause developmental harm



Yu Ping Chan

Speech speed

189 words per minute

Speech length

485 words

Speech time

153 seconds

UNDP supports over 10 countries with tools like eMonitor and iVerify to address hate speech and technology-facilitated gender-based violence

Explanation

UNDP, as the development arm of the UN present in 170 countries, provides comprehensive support to developing countries in building digital resilience. They work with over 10 countries using specific tools like eMonitor for hate speech and technology-facilitated gender-based violence, and iVerify for enhanced fact-checking, while developing a digital kit for democracy.


Evidence

UNDP presence in 170 countries; eMonitor tool for hate speech and technology-facilitated GBV; iVerify for fact-checking; digital kit for democracy development; focus on locally relevant, culturally sensitive solutions


Major discussion point

Grassroots and Community Initiatives


Topics

Capacity development | Gender rights online | Human rights principles


Agreed with

– Aminata Zerbo
– Mactar Seck
– Audience

Agreed on

Need for locally adapted, contextually appropriate solutions



Mactar Seck

Speech speed

128 words per minute

Speech length

521 words

Speech time

244 seconds

Over 400 million people connected to social media in Africa face challenges from misinformation promoting violence, terrorism, and gender violence

Explanation

Dr. Seck highlights the scale of the challenge in Africa, where over 400 million social media users are exposed to misinformation and disinformation that promotes various forms of violence, terrorism, and gender-based violence. He emphasizes that this is not just a national issue but a continental challenge affecting democratic processes and security.


Evidence

400+ million social media users in Africa; misinformation promoting violence, terrorism, and gender violence; impact on democracy and elections; examples from Sahel region including Mali, Burkina Faso, and Niger; diaspora influence on information spread


Major discussion point

Misinformation and Disinformation Challenges


Topics

Content policy | Gender rights online | Violent extremism


African countries are developing continental AI strategy frameworks and national policies incorporating misinformation and gender violence considerations

Explanation

UNECA is working at both continental and national levels to address misinformation challenges. They have developed continental frameworks including AI strategy, cybersecurity policy, and guidelines for member states, while supporting 10-14 countries in developing national AI policies that specifically address misinformation, disinformation, and gender violence.


Evidence

AI strategy framework for Africa; continental cybersecurity policy; cybersecurity guidelines for member states; 10-14 countries with national AI policies; platform development with Gambia for monitoring disinformation; applications for monitoring hate speech in Kenya


Major discussion point

Regulatory and Policy Responses


Topics

Legal and regulatory | Capacity development | Gender rights online


Agreed with

– Aminata Zerbo
– Yu Ping Chan
– Audience

Agreed on

Need for locally adapted, contextually appropriate solutions



IEEE representative

Speech speed

138 words per minute

Speech length

522 words

Speech time

226 seconds

IEEE is developing standards for age-appropriate design, age verification, and e-gaming to ensure products consider what’s appropriate for different age groups

Explanation

IEEE, as the world’s largest technical professional organization with 500,000+ members in 190 countries, is developing comprehensive technical standards to address age-appropriate access to digital content. Their standards cover age-appropriate design principles, age verification methods, and e-gaming guidelines to ensure products are designed with appropriate safeguards for different age groups.


Evidence

IEEE membership of 500,000+ in 190 countries; collaboration with Five Rights and eWorldwide; age-appropriate design standard; age verification standard released earlier in the year; upcoming e-gaming standard; work with Indonesia on implementation


Major discussion point

Technical Standards and Solutions


Topics

Digital standards | Child safety online | Children rights



Gitanjali Sah

Speech speed

167 words per minute

Speech length

316 words

Speech time

113 seconds

Outcomes from discussions should translate into actionable policies at international forums like the UN General Assembly

Explanation

Gitanjali emphasizes that the ITU is committed to ensuring that dialogues don’t remain just conversations but become calls for action. She explains that outcomes from high-level events like this workshop feed into the UN General Assembly overall review process, where real policy differences can be made at the international level.


Evidence

ITU commitment to providing platforms for dialogue; outcomes feeding into UN General Assembly overall review; chair’s report integration; potential for real policy impact at international level


Major discussion point

Call for Collective Action


Topics

Human rights principles | Interdisciplinary approaches | Legal and regulatory


Agreed with

– Sami Galal
– Salma Abbasi
– Aminata Zerbo

Agreed on

Multi-stakeholder responsibility and collective action needed



Audience

Speech speed

130 words per minute

Speech length

387 words

Speech time

177 seconds

Training programs for critical thinking and working with caretakers beyond parents are essential components of comprehensive approaches

Explanation

An audience member from Colombia&#8217;s regulatory authority shared their multi-faceted approach, which includes gamified open courses that train critical thinking and the recognition that children are not always with their parents. They emphasize the need to train all caretakers, including staff in kindergartens and other care settings, as these individuals play crucial roles in children&#8217;s digital safety.


Evidence

Colombia’s open course with gamification for critical thinking; fieldwork and surveys with caretakers; recognition that children are often with other caretakers beyond parents; training for kindergarten staff and other caregivers


Major discussion point

Grassroots and Community Initiatives


Topics

Online education | Capacity development | Children rights


Agreed with

– Aminata Zerbo
– Yu Ping Chan
– Mactar Seck

Agreed on

Need for locally adapted, contextually appropriate solutions


Agreements

Agreement points

Multi-stakeholder responsibility and collective action needed

Speakers

– Sami Galal
– Salma Abbasi
– Aminata Zerbo
– Gitanjali Sah

Arguments

Parents, governments, and private sector companies all have moral responsibilities as different lines of defense


Multi-stakeholder engagement including youth participation in writing and editing reports is crucial for effective solutions


Responses to digital threats must be collective, united, and adapted to respective national contexts rather than one-size-fits-all solutions


Outcomes from discussions should translate into actionable policies at international forums like the UN General Assembly


Summary

All speakers agree that addressing digital threats requires coordinated action across multiple stakeholders including parents, governments, private sector, and international organizations, with each having distinct but complementary responsibilities


Topics

Human rights principles | Children rights | Interdisciplinary approaches


Digital platforms exploit children’s vulnerabilities and cause developmental harm

Speakers

– Sami Galal
– Salma Abbasi
– Elise Elena Mola

Arguments

Screen time negatively affects multiple brain regions including prefrontal cortex, hippocampus, and visual cortex, leading to emotional regulation problems, learning difficulties, and vision issues


Digital platforms exploit children’s psychological vulnerabilities through dopamine-driven algorithms that disrupt brain development and promote instant gratification


AI algorithms on platforms like TikTok can expose young men to extreme right-wing propaganda and misogynistic content within 20 minutes


Summary

There is strong consensus that digital platforms are designed in ways that exploit children’s psychological and neurological vulnerabilities, causing measurable harm to brain development and exposing them to harmful content


Topics

Child safety online | Children rights | Content policy


Need for locally adapted, contextually appropriate solutions

Speakers

– Aminata Zerbo
– Yu Ping Chan
– Mactar Seck
– Audience

Arguments

Responses to digital threats must be collective, united, and adapted to respective national contexts rather than one-size-fits-all solutions


UNDP supports over 10 countries with tools like eMonitor and iVerify to address hate speech and technology-facilitated gender-based violence


African countries are developing continental AI strategy frameworks and national policies incorporating misinformation and gender violence considerations


Training programs for critical thinking and working with caretakers beyond parents are essential components of comprehensive approaches


Summary

Speakers from different regions emphasize the importance of developing solutions that are adapted to local contexts, cultures, and specific national challenges rather than importing one-size-fits-all approaches


Topics

Capacity development | Human rights principles | Interdisciplinary approaches


Similar viewpoints

All three speakers identify technology-facilitated gender-based violence and exposure to misogynistic content as major concerns, with algorithms actively promoting harmful gender attitudes and violent content

Speakers

– Salma Abbasi
– Mactar Seck
– Elise Elena Mola

Arguments

Technology-facilitated gender-based violence and exposure to toxic content is normalizing violence and creating harmful attitudes toward gender


Over 400 million people connected to social media in Africa face challenges from misinformation promoting violence, terrorism, and gender violence


AI algorithms on platforms like TikTok can expose young men to extreme right-wing propaganda and misogynistic content within 20 minutes


Topics

Gender rights online | Content policy | Child safety online


These speakers share a focus on age-appropriate access and design, emphasizing the need for technical standards and regulatory approaches that consider developmental appropriateness and children’s rights regarding their digital experiences

Speakers

– Fifi Aleyda Yahya
– IEEE representative
– Salma Abbasi

Arguments

Indonesia has adopted comprehensive approaches, including delaying independent social media access until ages 17-18 and establishing AI ethics principles


IEEE is developing standards for age-appropriate design, age verification, and e-gaming to ensure products consider what’s appropriate for different age groups


Children want ownership of their data, the right to destroy it, and transparency about data sales


Topics

Children rights | Digital standards | Child safety online


These speakers from developing countries share concerns about misinformation and disinformation as threats to democratic processes, social stability, and security, particularly in contexts of political instability

Speakers

– AHM Bazlur Rahman
– Aminata Zerbo
– Mactar Seck

Arguments

Misinformation and disinformation pollute the entire information ecosystem and threaten human progress by promoting violent extremism and undermining democracies


AI-generated content including deepfakes poses serious threats to public trust and democratic processes, especially affecting vulnerable populations


Over 400 million people connected to social media in Africa face challenges from misinformation promoting violence, terrorism, and gender violence


Topics

Content policy | Human rights principles | Violent extremism


Unexpected consensus

Youth as active solution creators rather than passive recipients

Speakers

– Sami Galal
– Salma Abbasi
– Gitanjali Sah

Arguments

Terms and conditions should be interactive and understandable so children can grasp online risks and time limits


Multi-stakeholder engagement including youth participation in writing and editing reports is crucial for effective solutions


Outcomes from discussions should translate into actionable policies at international forums like the UN General Assembly


Explanation

Unexpectedly, there was strong consensus on positioning young people not just as victims needing protection, but as active participants in creating solutions, writing guidelines, and contributing to policy development. This represents a shift from traditional protective approaches to empowerment-based strategies


Topics

Children rights | Human rights principles | Interdisciplinary approaches


Private sector moral responsibility beyond regulatory compliance

Speakers

– Sami Galal
– Salma Abbasi
– Elise Elena Mola

Arguments

Parents, governments, and private sector companies all have moral responsibilities as different lines of defense


Digital platforms exploit children’s psychological vulnerabilities through dopamine-driven algorithms that disrupt brain development and promote instant gratification


Current EU AI Act requirements focus on corporate efficiency rather than teaching society how to interact safely with AI systems


Explanation

There was unexpected consensus that private sector companies have moral responsibilities that go beyond legal compliance, with speakers calling for fundamental changes in how platforms are designed rather than just regulatory oversight. This represents a shift toward ethical business practices as a core requirement


Topics

Children rights | Content policy | Human rights principles


Overall assessment

Summary

The discussion revealed remarkably strong consensus across diverse speakers from different regions and sectors on key issues: the need for multi-stakeholder collective action, the harmful impact of current digital platform designs on children, the importance of locally adapted solutions, and the positioning of youth as active solution creators. There was also unexpected agreement on private sector moral responsibility and the inadequacy of current regulatory approaches.


Consensus level

High level of consensus with significant implications for policy development. The agreement spans technical, regulatory, and ethical dimensions, suggesting a mature understanding of the challenges and potential for coordinated global action. The consensus on youth empowerment and private sector moral responsibility indicates potential for innovative approaches that go beyond traditional regulatory frameworks. This level of agreement among diverse stakeholders suggests strong foundation for developing comprehensive, multi-faceted solutions to digital threats facing children.


Differences

Different viewpoints

Regulatory approach – age restrictions vs. comprehensive frameworks

Speakers

– Fifi Aleyda Yahya
– Elise Elena Mola

Arguments

Indonesia has adopted comprehensive approaches, including delaying independent social media access until ages 17-18 and establishing AI ethics principles


Current EU AI Act requirements focus on corporate efficiency rather than teaching society how to interact safely with AI systems


Summary

Indonesia advocates for specific age-based restrictions (delaying social media access until 17-18), while the EU approach focuses on corporate compliance and efficiency rather than user education and safety skills


Topics

Legal and regulatory | Child safety online | Children rights


Unexpected differences

Focus on technical standards vs. policy implementation

Speakers

– IEEE representative
– Elise Elena Mola

Arguments

IEEE is developing standards for age-appropriate design, age verification, and e-gaming to ensure products consider what’s appropriate for different age groups


Current EU AI Act requirements focus on corporate efficiency rather than teaching society how to interact safely with AI systems


Explanation

Unexpectedly, both speakers work in technical/regulatory spaces but have different perspectives – IEEE focuses on creating technical standards for age-appropriate design, while Mola criticizes existing regulations for missing the mark on actual user safety education


Topics

Digital standards | Legal and regulatory | Child safety online


Overall assessment

Summary

The discussion showed remarkable consensus on identifying problems (misinformation threats, child safety concerns, need for collective action) but revealed subtle disagreements on implementation approaches – ranging from age-based restrictions vs. education-focused solutions, international frameworks vs. locally-adapted responses, and technical standards vs. policy reform


Disagreement level

Low to moderate disagreement level with high consensus on problems but divergent views on solutions. This suggests a mature policy discussion where stakeholders agree on challenges but bring different expertise and contextual perspectives to solutions, which could be complementary rather than conflicting if properly coordinated


Takeaways

Key takeaways

Digital technologies are causing a ‘hidden public health crisis’ affecting children’s physiological, psychological, and social development through screen time exposure and dopamine-driven algorithms


AI-generated misinformation and disinformation pose serious threats to democratic processes, public trust, and social cohesion, particularly affecting vulnerable populations including children


Technology-facilitated gender-based violence and exposure to toxic online content are normalizing violence and creating harmful societal attitudes


Current regulatory frameworks and corporate AI literacy requirements are insufficient – they focus on efficiency rather than teaching society safe AI interaction


Collective, multi-stakeholder action is essential, requiring collaboration between parents, governments, private sector, and international organizations


Age-appropriate design standards and interactive terms and conditions are needed to protect children online


Local, culturally sensitive solutions are more effective than one-size-fits-all global approaches


Youth participation in developing solutions and peer-to-peer education programs show promising results


Resolutions and action items

Expand the draft guidelines for children’s safe engagement on social media and gaming platforms beyond the current 36 countries with IEEE Working Group collaboration


Launch the ‘Hidden Public Health Crisis’ report online and distribute widely to raise awareness


Implement digital parenting education programs and peer-to-peer communication initiatives in schools


Develop interactive terms and conditions that children can understand, written by children for children


Submit outcomes from this discussion to the UN General Assembly overall review through the chair’s report


Establish multi-ministry collaboration involving Health, Education, ICT, and Justice departments to address the crisis holistically


Create global digital literacy and public awareness campaigns in multiple languages


Develop technical standards for age-appropriate design, age verification, and e-gaming through IEEE


Unresolved issues

Limited regulatory authority over international platforms – many regulators can only influence traditional service providers, not global social media platforms


Enforcement mechanisms for collective action against trillion-dollar social media companies remain unclear


Specific implementation timelines and funding mechanisms for proposed initiatives were not established


How to balance freedom of expression with protection from harmful content across different cultural contexts


Technical challenges in age verification and content filtering without compromising privacy


Addressing the gap between policy discussions and actual corporate implementation practices


Scaling successful local initiatives to global implementation


Suggested compromises

Delaying independent social media access until age 17-18 rather than imposing complete bans, allowing supervised access before that age


Using codes of conduct and guidelines for platforms where direct regulation is not possible


Implementing gradual awareness campaigns through accessible formats like radio shows and journals for time-constrained parents


Developing locally relevant, culturally sensitive solutions rather than imposing universal standards


Creating voluntary industry standards through organizations like IEEE while working toward regulatory frameworks


Focusing on education and critical thinking skills alongside technological solutions


Thought provoking comments

I am calling this a public health crisis and a hidden public health crisis. And the synthesis of all of the cases that we have covered over 14 countries show an alarming significance of self-harm and suicide. And it’s 96%, which is a very big problem of those cases that exist are either killing somebody or killing yourself.

Speaker

Salma Abbasi


Reason

This comment reframes the entire discussion by positioning digital misinformation and harmful content not as a technology or education issue, but as a public health emergency. The 96% statistic linking cases to self-harm or violence against others is particularly striking and elevates the urgency of the problem beyond typical policy discussions.


Impact

This framing set the tone for the entire workshop, establishing the gravity of the situation and justifying the need for multi-ministerial collaboration (Health, Education, ICT, Justice). It influenced subsequent speakers to address the issue with corresponding urgency and seriousness.


When you’re on screens, you don’t have to put the same effort as when you’re doing physical activities, which just makes the reward that more enjoyable, as you don’t have to put in the work and you get the same effect.

Speaker

Sami Galal


Reason

This insight from a young person provides a peer perspective on the neurological addiction mechanism that adults often struggle to articulate. The comment demonstrates sophisticated understanding of dopamine pathways and reward systems, showing how children themselves can understand and explain the science behind their own digital experiences.


Impact

This comment validated the scientific approach to the discussion while demonstrating that young people can be active participants in understanding and solving the problem, not just passive victims. It reinforced the importance of peer-to-peer education and youth participation in solutions.


There was some interesting studies done, for example, that within 20 minutes of using TikTok, young men are shown extreme right-wing propaganda, violence, misogynist content. And what we’re missing is understanding, you know, how is the AI algorithm manipulating us and playing to our most vulnerable evolutionary aspects in order to maximize our screen time.

Speaker

Elise Elena Mola


Reason

This comment bridges the gap between technical AI implementation in corporations and real-world social consequences. The specific timeframe (20 minutes) makes the threat tangible and immediate, while connecting algorithmic manipulation to evolutionary psychology adds depth to understanding why these systems are so effective.


Impact

This shifted the discussion from general concerns about misinformation to specific, measurable algorithmic manipulation tactics. It highlighted the disconnect between corporate AI governance (focused on efficiency) and societal AI literacy needs (focused on recognizing manipulation).


We are delaying the age for our teenagers to be able to access social media. But we’re not banning them to use the gadget, just delaying them 17 to 18 years old. So they can, at that age, after that age, they can access social media independently without parents’ supervision.

Speaker

Fifi Aleyda Yahya


Reason

This represents a concrete, implemented policy solution that balances protection with rights. The distinction between device access and social media access shows nuanced policy thinking that addresses developmental concerns while maintaining digital literacy opportunities.


Impact

This provided a practical example of how countries can take proactive regulatory action, influencing the discussion toward concrete policy solutions rather than just problem identification. It demonstrated that protective measures don’t require complete digital restriction.


The only country in the world that hasn’t signed the UN Convention on the Rights of a Child is United States of America. And that’s so funny because even North Korea and, you know, whether you’re South Sudan, every country has signed this.

Speaker

Salma Abbasi


Reason

This comment exposes a fundamental irony in global child protection efforts and challenges assumptions about which countries lead in children’s rights. It highlights how geopolitical considerations can override child welfare concerns even in developed nations.


Impact

This observation reframed the discussion from developed vs. developing country perspectives to show that leadership in child protection can come from unexpected sources. It empowered Global South participants to see themselves as potential leaders rather than followers in this space.


Parents should know not to give the phone to the child. As Ms. Salma said, it’s like a digital babysitter. So even when the parent has to go and do an activity and it’s easier to give the phone to the child, they should only do that as a last resort after trying some more interactive activities.

Speaker

Sami Galal


Reason

This comment from a young person directly addressing parental behavior is particularly powerful because it comes from the demographic being protected. The practical acknowledgment of parental convenience while still advocating for limits shows mature understanding of real-world constraints.


Impact

This shifted the discussion from top-down policy solutions to ground-level behavioral changes, emphasizing that effective solutions require changes in daily family practices, not just regulatory frameworks.


Overall assessment

These key comments fundamentally shaped the discussion by elevating it from a typical technology policy conversation to a multi-dimensional crisis requiring urgent, coordinated action. The framing as a ‘hidden public health crisis’ established the gravity and cross-sectoral nature of required solutions. The inclusion of youth voices, particularly Sami’s scientific and practical insights, demonstrated that effective solutions must include those most affected. The concrete policy examples from Indonesia and technical insights about algorithmic manipulation provided both hope and specificity to the discussion. Together, these comments created a comprehensive framework that moved beyond problem identification to actionable, multi-stakeholder solutions while maintaining focus on the most vulnerable populations. The discussion successfully balanced scientific rigor, policy practicality, and human impact, largely due to these pivotal contributions that each added essential dimensions to understanding and addressing the crisis.


Follow-up questions

How can we better understand the hidden developmental issues not currently covered in screen time research?

Speaker

Salma Abbasi


Explanation

She mentioned that while research on screen time, overuse, and poor sleep exists, the hidden developmental issues are not adequately covered, indicating a need for deeper investigation into these invisible impacts on children.


How can we develop more sophisticated detection methods for AI-generated content as it becomes more seamless?

Speaker

Fifi Aleyda Yahya


Explanation

She noted that while we can currently spot AI-generated content, it will become more sophisticated and seamless in the near future, requiring advanced detection capabilities.


How can we develop locally relevant, culturally sensitive, contextual solutions for different countries rather than one-size-fits-all approaches?

Speaker

Yu Ping Chan


Explanation

She emphasized that global solutions developed in the West may not be appropriate for all contexts, particularly in developing countries, and local solutions need to be developed.


How can we expand the disinformation monitoring platform developed with Gambia to the continental level across Africa?

Speaker

Mactar Seck


Explanation

He mentioned they developed a platform with Gambia to monitor disinformation and fake news as a first step, and are exploring how to expand this across the African continent.


How can regulators exercise their power to switch off platforms collectively when dealing with harmful content?

Speaker

Salma Abbasi


Explanation

She noted that regulators have the power to switch off platforms (citing examples from Indonesia and Brazil) but often don’t exercise it, suggesting need for research on collective regulatory action.


How can we address biases in AI algorithms, particularly regarding gender discrimination and impacts on children and older persons?

Speaker

Gitanjali Sah


Explanation

She raised the important issue of biases in AI algorithms as another critical area that needs attention, particularly for vulnerable populations.


How can we better train caregivers beyond parents (kindergarten staff, household helpers) who spend time with children?

Speaker

Claudia Bustamante


Explanation

She pointed out that children are not always with their parents but with other caregivers who also need training on digital safety and critical thinking.


What are the implications and potential of comprehensive legislation giving individuals copyrights to their voice, face, and body, as being implemented in Denmark?

Speaker

Carol Constantine


Explanation

She shared information about Denmark’s new legislation and asked about its potential as a way forward and what hurdles might exist in implementing similar measures elsewhere.


How can we better integrate digital literacy and critical thinking about misinformation into school curricula globally?

Speaker

Dr. Eva Fell


Explanation

She emphasized the need to focus on schools as the primary venue for teaching children critical thinking skills about digital content and misinformation.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.