Parliamentary Session 3: Click with Care, Protecting Vulnerable Groups Online
24 Jun 2025 09:00h - 10:30h
Session at a glance
Summary
This discussion focused on protecting vulnerable groups online, bringing together parliamentarians, regulators, and advocacy experts from various countries to examine legislative and regulatory responses to digital harm. The panel explored how marginalized communities, particularly in the Global South, face unique online safety challenges that existing frameworks often fail to address adequately.
Neema Iyer from Uganda highlighted research showing that one in three African women experience online violence, often leading them to delete their digital presence due to lack of awareness about reporting mechanisms and fear of not being heard by authorities. She emphasized how intersecting inequalities, language barriers, and the normalization of abuse create complex challenges that narrow legislative frameworks cannot fully address. Raul Manuel from the Philippines discussed recent legislative measures including extraterritorial jurisdiction for child exploitation cases and expanded anti-violence bills, while noting the economic factors that drive children into exploitation.
Malaysian Deputy Minister Teo Nie Ching outlined her country’s holistic approach combining digital inclusion, robust legal frameworks, and multi-stakeholder collaboration, but acknowledged enforcement challenges with major platforms like Meta and Google refusing to comply with licensing requirements. Nighat Dad from Pakistan described the rise of AI-generated deepfake content and highlighted disparities in platform responses between Global North and South users, noting that non-public figures receive delayed or no response to abuse reports.
Arda Gerkens from the Netherlands discussed balancing human rights with content removal powers, revealing concerning trends of hybrid threats where terrorist groups target vulnerable children through mental health channels. Sandra Maximiano from Portugal introduced behavioral economics perspectives, explaining how cognitive biases affect online decision-making and can be leveraged to promote safer behaviors through design interventions and nudges.
The discussion revealed consensus that content takedowns alone are insufficient, with panelists advocating for greater platform transparency, algorithmic accountability, proactive design measures, and coordinated international responses. The session concluded with calls for global cooperation among regulators and recognition that protecting vulnerable groups online requires addressing both technological and human factors through multi-stakeholder collaboration.
Key points
## Major Discussion Points:
– **Unique challenges faced by marginalized communities in the Global South**: Discussion of intersecting inequalities, digital literacy gaps, language barriers, normalization of abuse, and how existing laws are often weaponized against the very groups they’re meant to protect, particularly women and marginalized communities.
– **Legislative and regulatory responses across different jurisdictions**: Panelists shared specific examples from the Philippines, Malaysia, Netherlands, Pakistan, and Portugal, highlighting both successful measures (like extraterritorial jurisdiction for child exploitation) and enforcement challenges, particularly with major tech platforms refusing to comply with local regulations.
– **Platform accountability beyond content takedowns**: Extensive discussion on the need for platforms to be more proactive, including algorithm transparency, improved reporting mechanisms, design friction to prevent harmful content sharing, and the importance of addressing root sources rather than just reactive content removal.
– **Behavioral economics and human-centered approaches**: Introduction of how cognitive biases affect online behavior and how regulators can use behavioral insights to nudge safer online practices, along with emphasis on addressing offline social structures and community-based solutions.
– **Need for coordinated global response**: Strong consensus that individual countries lack sufficient negotiating power with tech giants, leading to calls for regional blocs (like ASEAN) and international cooperation through networks like the Global Online Safety Regulators Network (GOSRN).
## Overall Purpose:
The discussion aimed to bring together diverse stakeholders (parliamentarians, regulators, and advocacy experts) to examine how to better protect vulnerable groups online, share experiences across different jurisdictions, and develop more targeted, inclusive, and enforceable policy responses to online harms.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, with panelists building on each other’s insights rather than debating. There was a shared sense of urgency about the challenges, but also cautious optimism about potential solutions. The tone became increasingly focused on practical cooperation and concrete next steps toward the end, culminating in calls for international coordination and the promotion of existing collaborative networks.
Speakers
**Speakers from the provided list:**
– **Alishah Shariff** – Moderator, works at Nominet (the .UK domain name registry)
– **Neema Iyer** – Founder of Pollicy, a feminist organization based in Kampala, Uganda; works on feminist digital rights issues including online violence, gender disinformation, and AI impact on women; member of Meta’s Women’s Safety Board
– **Raoul Danniel Abellar Manuel** – Elected member of Parliament in the Philippines, representing the Youth Party; former student government and union activist
– **Teo Nie Ching** – Deputy Minister of Communication, Malaysia; appointed in December 2022; previously served as Deputy Minister of Education in 2018; mother of three
– **Nighat Dad** – Founder of Digital Rights Foundation, a woman-led organization based in Pakistan; focuses on digital rights, gender justice, online safety, and freedom of expression; serves on the Meta Oversight Board
– **Arda Gerkens** – President of ATKM, the Dutch regulatory authority for online terrorist content and child sexual abuse material; former member of Parliament (8 years) and senator (10 years)
– **Sandra Maximiano** – Chairwoman of ANACOM, the Portuguese National Authority for Communications; digital service coordinator; economist specialized in behavioral and experimental economics
– **Anusha Rahman Khan** – Former Minister for Information Technology and Telecommunications; enacted cyber crime law in 2016; currently chairs standing committee on information technology
– **Andrew Campling** – Runs a consultancy; trustee of the Internet Watch Foundation
– **Audience** – General audience member asking questions
**Additional speakers:**
– **John Kiariye** – From Kenya; made comments about human-centered design and community-based approaches
Full session report
# Protecting Vulnerable Groups Online: A Multi-Stakeholder Discussion on Digital Safety and Platform Accountability
## Executive Summary
This comprehensive discussion brought together parliamentarians, regulators, and digital rights advocates from across the globe to examine the complex challenges of protecting vulnerable groups in digital spaces. Moderated by Alishah Shariff from Nominet (.UK domain name registry), the panel featured diverse perspectives from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya, highlighting both the universal nature of online harm and the unique contextual challenges faced by different regions.
The conversation revealed that while online platforms have transformed communication and access to information, they have simultaneously created new vectors for harm that disproportionately affect marginalised communities, particularly women and children. Key challenges identified included the inadequacy of reactive content moderation, geographic inequalities in platform responses, the rise of AI-generated harmful content, and the weaponisation of protective legislation against the very groups it aims to protect. The discussion moved beyond traditional approaches to explore innovative solutions rooted in behavioural economics, community-based interventions, and coordinated international responses.
## Opening Context and Participant Introductions
The session, titled “Click With Care, Protecting Vulnerable Groups Online,” was part of the Internet Governance Forum (IGF) and featured interpretation services in English, Spanish, and French. Participants represented a diverse range of expertise and geographic perspectives:
– **Neema Iyer** from Uganda, representing Pollicy and speaking from her experience in digital rights advocacy
– **Raoul Danniel Abellar Manuel**, representing the Youth Party in the Philippine Parliament
– **Deputy Minister Teo Nie Ching** from Malaysia’s Ministry of Communications
– **Nighat Dad**, founder of the Digital Rights Foundation in Pakistan and member of the Meta Oversight Board
– **Anusha Rahman Khan**, former Minister for Information Technology and Telecommunications in Pakistan (served for five years)
– **Arda Gerkens**, President of ATKM (Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse Material) in the Netherlands
– **Sandra Maximiano**, Chairwoman of ANACOM (the Portuguese National Authority for Communications) and an economist specialising in behavioural and experimental economics
– **John Kiariye** from Kenya, representing community-based perspectives
## Research Findings on Online Violence Against Women and Children
### Stark Statistics from Africa
Neema Iyer opened the substantive discussion with sobering research findings that framed the entire conversation: “One in three women across Africa experience online violence, and many of them end up deleting their digital presence because they don’t have adequate support systems and they feel like they’re not going to be heard by authorities.”
This statistic illuminated a broader pattern of digital exclusion, where those who could benefit most from online participation are driven away by harassment and abuse. Iyer explained how intersecting inequalities create complex barriers to digital safety: “There are large gaps in digital literacy and access, and platforms often don’t prioritise smaller markets or local languages.”
### The Weaponisation of Protective Laws
Perhaps most troubling was Iyer’s observation about how protective legislation can be turned against its intended beneficiaries: “The laws that do exist, especially in our context, have actually been weaponised against women and marginalised groups. So many of these cybercrime laws or data protection laws have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.”
This paradox challenges fundamental assumptions about the relationship between legislation and protection, suggesting that legal frameworks alone are insufficient without proper implementation and safeguards against misuse.
## Country-Specific Legislative and Regulatory Approaches
### The Philippines: Comprehensive Legal Framework
Manuel outlined several legislative initiatives demonstrating the Philippines’ comprehensive approach to online safety. The country passed Republic Act 11930, addressing online sexual abuse of children, and the House of Representatives approved an expanded anti-violence bill that “defines psychological violence through electronic devices as violence against women.”
Additionally, amendments to the Safe Spaces Act set higher standards for government officials, recognising their particular responsibility in online spaces. However, Manuel highlighted enforcement challenges: “Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence.”
The scale of internet usage in the Philippines adds urgency to these efforts: “An average Filipino spends around eight hours and 52 minutes, or roughly nine hours per day, on the internet,” Manuel noted, emphasising the significant exposure to potential online harms.
### Malaysia: Multi-Faceted National Strategy
Deputy Minister Teo Nie Ching outlined Malaysia’s comprehensive approach, which combines legislative updates, platform regulation, and extensive public education. For the first time in 26 years, Malaysia amended its Communications and Multimedia Act, increasing penalties for child sexual abuse material and grooming.
Malaysia developed a code of conduct for social media platforms with over 8 million users and established “900 national information dissemination centres” alongside a “national internet safety campaign targeting 10,000 schools.” The campaign uses a modular approach for different age groups, recognising that safety education must be age-appropriate.
However, significant enforcement challenges remain. Nie Ching revealed that while “only X, TikTok, and Telegram have applied for licenses” under the new framework, “major platforms like Meta and Google have not applied for licenses.” This resistance led to a crucial insight: “Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN.”
### Pakistan: Balancing Protection and Rights
Anusha Rahman Khan, who served as Pakistan’s Minister for Information Technology and Telecommunications for five years, enacted Pakistan’s cyber crime law in 2016, introducing “28 new penalties criminalising violations of natural person dignity.”
Khan emphasised the ongoing challenges in balancing commercial interests with protection needs: “Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination.”
### The Netherlands: Addressing Hybrid Threats
Arda Gerkens introduced the concept of hybrid threats that combine multiple forms of online harm. Her organisation has unique powers to identify and remove terrorist content and child sexual abuse material, but faces increasingly complex challenges as different forms of harmful content become intertwined.
“We see more and more hybridisation of these types of content mixed together,” Gerkens explained. “We’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online… these terrorist groups or extremist groups are actually targeting vulnerable kids.”
This hybridisation represents a fundamental challenge to traditional regulatory approaches that treat different forms of harm in isolation. Gerkens noted that terrorist groups are “increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion.”
## Platform Accountability and Geographic Inequalities
### Inadequate Response Systems
The discussion revealed significant problems with current platform accountability mechanisms. Nie Ching highlighted practical limitations: “Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal.”
She provided a specific example involving Dato’ Lee Chong Wei, Malaysia’s famous badminton player, whose image was used in scam posts. Despite his verified status, removing the fraudulent content required regulatory intervention rather than effective platform mechanisms.
### Geographic Disparities in Platform Response
Nighat Dad from Pakistan’s Digital Rights Foundation, which has handled over 20,000 complaints since 2016, highlighted stark inequalities in platform responses: “Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment.”
This disparity is exacerbated by recent changes in platform policies. Dad noted that Meta’s scaling back of proactive enforcement systems “shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only.”
### The Rise of AI-Generated Harm
Dad also highlighted an emerging threat that exemplifies how technological advancement can amplify harm: “We are seeing a rise of AI-generated deepfake content, causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide.”
This technology democratises the creation of sophisticated abuse material while making it more difficult for victims to prove the falsity of harmful content, representing a qualitative shift in the nature of online harm.
## Behavioural Economics and Human-Centred Approaches
### Understanding Cognitive Vulnerabilities
Sandra Maximiano introduced a novel analytical framework through behavioural economics. “Users are affected by cognitive biases like confirmation bias, overconfidence bias, and optimism bias that influence online behaviour and decision-making,” she explained.
Maximiano emphasised that “vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design.” This insight suggests that effective protection requires understanding not just what harms occur, but why people are susceptible to them.
The potential for both exploitation and protection through behavioural insights became clear: “AI systems can exploit cognitive biases and overlook vulnerabilities, potentially causing significant harm even without intentional exploitation.” However, the same understanding can be used positively through “better user interface design, nudging safe behaviour, and using social norms messaging.”
### Community-Based Solutions
John Kiariye from Kenya introduced a crucial human-centred perspective: “The offenders are human. The victims are humans. If we concentrate on the technology, we are losing a very big part because this young person can be trained to be a bully.”
Kiariye advocated leveraging existing social structures such as “schools, clubs, and family units to empower victims and prevent online abuse before it occurs.” This approach recognises that online behaviour is shaped by offline social structures and that effective prevention requires community-level interventions.
## Areas of Consensus and Disagreement
### Strong Consensus on Platform Reform
Despite diverse backgrounds, speakers demonstrated remarkable consensus on the inadequacy of current platform accountability mechanisms. All agreed that transparency in content moderation processes, proactive identification of harmful sources, and addressing geographic inequalities in platform responses are essential.
### International Coordination is Essential
Government representatives from Malaysia, the Philippines, and Pakistan all acknowledged that individual nations have limited leverage against major tech platforms, leading to growing support for coordinated international or regional approaches.
### Key Disagreement: Privacy Versus Safety
The most significant disagreement emerged during audience questions about age verification technologies. Andrew Campling from the audience argued that “privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms,” citing the statistic that “300 million children annually are victims of technology-facilitated sexual abuse.”
However, Iyer strongly opposed such measures: “I think absolutely not… it’s really giving all your data to these platforms. I think it’s a very slippery slope to a bad place… people will get around all these things anyway. So I think there are better interventions rather than taking away the last shred of our privacy.”
## International Cooperation and Future Directions
### Regional Approaches to Global Challenges
The discussion revealed growing recognition that effective platform regulation requires coordinated international action while respecting cultural differences. Nie Ching advocated for regional approaches: “Different regions need different standards that meet their cultural, historical, and religious backgrounds rather than one-size-fits-all approaches.”
Gerkens mentioned the existence of the Global Online Safety Regulators Network and invited participation, representing an attempt to share best practices across jurisdictions.
### Addressing Root Causes
Manuel introduced crucial economic dimensions: “Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material.” This observation highlights how online harms often reflect offline inequalities and vulnerabilities.
## Conclusion
This comprehensive discussion revealed both the complexity of protecting vulnerable groups online and the potential for innovative, collaborative solutions. The conversation demonstrated growing sophistication in understanding online harm, moving from reactive content removal toward proactive prevention and addressing root causes.
Key insights included the recognition that online safety is fundamentally a human challenge requiring understanding of psychology, economics, and social structures alongside technical solutions. The emphasis on international coordination, cultural sensitivity, and multi-stakeholder collaboration suggests a maturing approach to online safety policy.
However, significant challenges remain, from platform resistance to enforcement difficulties to fundamental tensions between privacy and safety. Success will depend on sustained commitment to collaborative solutions that are both effective and respectful of fundamental rights and cultural differences across diverse global contexts.
Session transcript
Alishah Shariff: Good morning, everyone, and welcome to today’s session, Click With Care, Protecting Vulnerable Groups Online. I’m delighted you’re able to join us. I know there were some travel difficulties getting in this morning, so thank you for being here, and thank you also to our esteemed panelists for joining us today. My name is Alishah, and I work at Nominet, the .UK domain name registry, and I’ll be moderating today’s panel. Just a bit of housekeeping before we begin. You’ll have interpretation in your headphones in English, Spanish, and French, and when we open the floor to interventions and questions, you can ask your question by going to the microphone to my left and your right. So it’s a pleasure to chair today’s session, which brings together a diverse panel of parliamentarians, regulators, and advocacy experts to discuss a critical issue, which is how do we protect vulnerable groups online. We live in an increasingly digital world, which offers opportunities for connection, learning, and growth. But the digital world also brings with it risks and downsides, which are often felt more acutely by vulnerable groups, including children, individuals with disabilities, and members of marginalized communities, amongst others. The consequences of harm faced online can have a ripple effect into real lives, causing distress, harm, and isolation. The challenge of online harm has prompted a range of legislative and regulatory responses, as well as proactive and reactive approaches, and today’s session will enable us to better understand some of these across a range of geographies and contexts. I hope that by the end of the session, we’ll get a sense of how we can work towards a more targeted, inclusive, and enforceable policy response to online harms. I’ll now hand over to each member of our esteemed panel to briefly introduce themselves. So I think we’ll start with Neema.
Neema Iyer: Oh, super. Hi everyone. Good morning and thank you so much for joining us here. My name is Neema Iyer and I am the founder of Pollicy. Pollicy is a feminist organization based in Kampala, Uganda, but we work all across the continent and we are very interested in any issues related to feminist digital rights. So this could be about online violence, gender disinformation, the impact of AI on women, and any such topics. And yeah, we do a lot of research on these topics. We also work very closely in local communities and of course we do advocacy work, which is part of why we are here as well. Thank you. Over to you.
Alishah Shariff: Thank you, Nima. Next we’ll hear from Raul.
Raoul Danniel Abellar Manuel: Hello, good morning everyone. I am Raul Manuel. You can call me Raul. I am an elected member of Parliament in the Philippines, representing the Youth Party. And prior to being a part of the Youth Party and of the Philippine Parliament, I was active in the student government and the student union. That’s why I have been paying close attention to this issue of online freedoms and protections. Thank you.
Alishah Shariff: Thank you, Raul. And next we have Your Excellency, Teo Nie Ching.
Teo Nie Ching: Hello, good morning everyone. Thank you, Alicia, for the introduction. My name is Nie Ching. I’m from Malaysia. I’m currently the Deputy Minister of Communication. I was appointed to this office in December 2022. However, in the year 2018, I also had this opportunity to serve in the Ministry of Education as the Deputy Minister as well. I’m currently a mother of three, so protecting children and our minors on the internet is a topic that is very, very close to my heart. And under the Ministry of Communications, we have a very important agency that is called MCMC, the Malaysian Communications and Multimedia Commission, which acts as a regulator for content moderation, platform providers, etc. Looking forward to this fruitful discussion.
Alishah Shariff: Thank you. And next we have Nighat.
Nighat Dad: Good morning everyone. My name is Nighat, and I’m a founder of Digital Rights Foundation, an organization, a woman-led organization based in Pakistan, and we are committed to advance digital rights with a particular focus on gender justice, online safety, and freedom of expression. Our work is grounded in both direct support and systemic change. We have a digital security helpline which provides legal advice, digital security assistance, and psychosocial support to victims and survivors of online abuse, and has a survivor-centered approach. And we also conduct in-depth research, build digital literacy and safety tools, and engage in policy advocacy conversations at the national, regional, and international levels.
Alishah Shariff: Thank you. And Arda, I’m glad you were able to make it, and thanks for joining. Thank you.
Arda Gerkens: Thank you very much, and excuse me for being late, the train was so much delayed. So, my name is Arda Gerkens, I am the president of the regulatory body for online terrorist content and child sex abuse material, ATKM, that’s the abbreviation. I used to be a member of Parliament for eight years, and a senator for ten years, so I bring some political experience too. My organization is really there to identify harmful content, terrorist content and child sex abuse material, and is able to remove that content or have it removed, and if not, we’ll find the ones who are not complying with our regulation. We’re kind of unique in the field, I think we’re the first regulator, at least as far as I know, who has that special right to dive into that content. And yeah, looking forward to the discussion today.
Alishah Shariff: Great, thank you. And we have one panelist who’s still on their way here. So when they join, we should also have Sandra Maximiano, who’s the president of ANACOM Portugal. So hopefully she’ll be able to join us shortly. So the way this will work today is we have some questions for our panelists that they will speak to, followed by a quick fire round, and then we’ll open out the floor for your interventions and questions. So without further ado, I think my first question is for Nima. So Nima, what are some of the unique online safety challenges faced by marginalized communities, particularly in the Global South, that may not be adequately addressed in existing legislative frameworks?
Neema Iyer: Thank you so much for that question. So first, I just want to start by framing some of the research that we’ve already done on this topic. So, for example, we did research with 3,000 women across Africa to understand their online experiences, and we found out that about one in three women had experienced some form of online violence, and this basically led them to deleting their online identity, because many of them were not aware of reporting mechanisms, and they also felt that if they went to any authorities that they would not be listened to. A second study that we did is a social media analysis of women politicians during the Ugandan 2021 election. We wanted to see what was the experience like for women politicians, and we found that they are often targets of sexist and sexualized abuse. But more importantly, the fear of the abuse on online spaces meant that many women politicians did not actually have online profiles or chose not to exist and to participate in the online sphere. And in the third research we did is on the impact of AI on women. So we often tend to think of, when we think of care, we think of social media, but more importantly thinking of how does AI impact women in, you know, that may be marginalized in some way, and we found out there are grave issues of under-representation and data bias. There’s algorithmic discrimination. AI makes it very possible for digital surveillance and censorship. There’s labor exploitation, and also there’s a threat to low-wage jobs, which often tend to be occupied by women. So I just wanted to frame that research first and then talk more about the question, which is, what is unique about this group? And the first one is that there are intersecting inequalities, so there are large gaps in digital literacy and digital access, for example. And so when you are trying, both as a platform and a civil society or a government, you have to take into account the fact that there are some women who have absolutely no access, have no digital skills, and you know, this is across the spectrum. So how do you tailor interventions that can meet all these different people who exist in all these different inequalities? Then in our context, for example, in Uganda, there are about 50 languages that are spoken, in Uganda alone, not considering the whole continent. And because these are smaller countries, they don’t have a huge market share, you know, on these online platforms. They’re often not prioritized. And so how do you develop interventions? How do you make safety mechanisms when, you know, you don’t have these languages on your platform? Another one I want to talk about is the normalization of abuse, which is, you can see in real life and in online spaces, that are both cultural and a result of platform inaction. So in regular life, you go on the street, you get harassed, you go to the police, they don’t do anything. That is replicated in online platforms, where you face this harassment, you reach out for recourse on the platforms, and there is platform inaction. So basically, in that way, this kind of online abuse is normalized. And then there’s the invisibility in platform governance processes. Of course, this is an amazing venue where we can talk about these issues, but a lot of women, marginalized groups, are not in these rooms with us right now to talk about their experiences. 
And then lastly, I just want to talk about the fact that the laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them. That’s the reality that we live in. So the fact is that legislative frameworks are often too narrow. They, you know, they focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women. So for example, a law might address intimate image sharing, but it won’t, you know, it’ll ignore coordinated disinformation campaigns, for example, or it will ignore this ideological radicalization that’s happening to minors online. Or, you know, it won’t target specifically the design choices that platforms make, for example, like where, you know, they amplify violence or those kinds of things. So I think we really need to think broader about how we are legislating about online violence, and I’m really glad that this conversation is happening. So back to you.
Alishah Shariff: Thank you. And that was, I think there was so much in there from the kind of, you know, the sphere of abuse in online spaces to how different people feel and experience being marginalized, and then also how some of these kind of legislative measures and also policies can sometimes have an adverse effect, and really thinking about the context. But thinking about how we do kind of good regulation, we’ll turn now to Raul. So as a Member of Parliament, could you share recent legislative measures in the Philippines to address online exploitation of children and pending efforts to protect women, LGBTQI+, and other marginalized communities from online violence and threats?
Raoul Danniel Abellar Manuel: Yeah, thank you, Alicia. And before I proceed, I’d like to thank the IGF Secretariat for this opportunity to share our perspectives from the Parliament of the Philippines. In our case, we have been pushing for vibrant debates and discourse to ensure that protections for marginalized and vulnerable groups do not come at the expense of sacrificing our basic and human rights. The Philippines right now, just for context, ranks number three as of February 2025 in terms of the daily time spent by citizens in using the internet. An average Filipino spends around eight hours and 52 minutes, or roughly nine hours per day, on the internet, which is much higher than the global average of six hours and 38 minutes. So while this time can be spent to, you know, connect with friends, family, conduct research, do homework, this also exposes vulnerable groups, including young people, to different forms of violence and threats. For example, the Philippines, unfortunately, has been a hotspot of online sexual abuse and exploitation of children, and also the distribution and production of child sexual abuse and exploitation materials. So this is a problem that we have to acknowledge so that we can take proactive measures in addressing it. Second would be the electronic violence against women and their children, which we call e-VAWC for short, and third, among the major forms of violence and threats online, would be harassment based on identity and belief. So I will briefly touch upon what we have been doing in Parliament to address these. First, when it comes to online sexual abuse and exploitation of children, we recently had Republic Act 11930, which lapsed into law on July 30, 2022, so it is kind of fresh, and aside from content takedown, one major component of this is the assertion of extraterritorial jurisdiction, which means the state shall exercise jurisdiction if the offense either commenced in our country, the Philippines, or if it was committed in another country by a Filipino citizen or a permanent resident against a citizen of the Philippines. Recognizing that the problem of online sexual abuse of children can happen not just in a single occasion, but it can be part of a coordinated network involving several hubs or locations. That’s why we really had to put this into law. When it comes to electronic violence against women and children, the House of Representatives, on its part, approved the expanded anti-violence bill. It defines psychological violence, including different forms, including through electronic or ICT devices. The use of those devices can be considered, and it was defined to be, part of violence against women. We did this in the House of Representatives, but since the Philippines is bicameral, that’s why we’re still waiting for the Senate to also speed up in its deliberations. Now, when it comes to online harassment based on identity and belief, we approved at the committee level so far amendments to the Safe Spaces Act, which sets a higher standard on government officials who may be promoting acts of sexual harassment through digital or social media platforms, like when they have speech that tends to discriminate against those in the LGBT community. Finally, we have a pending bill in the House of Representatives, which seeks to criminalize the tagging of different groups, individuals, as state enemies, subversives, or even terrorists without much basis in such labeling.
Recently, the Supreme Court adopted the term red tagging, which has been a source of harm and violence that transcends up into the physical world. That’s all for now, and I hope that this can be a source of discussions also on how we can really work together to address these online problems. Thank you.
Alishah Shariff: Thank you, Raul. I think that was really eye-opening, and there’s definitely lots happening in your legislative space, and I think it’s really nice that we have this mix of where you’ve got kind of slightly newer regulation and legislation, and also to hear from somebody later on who has experience of kind of enforcing this sort of regulation. So, moving from the Philippines to Malaysia, next I will turn to Her Excellency Teo Nie Ching. So, what is Malaysia’s core philosophy and overall strategy for protecting vulnerable groups in today’s complex digital environment, and how does Malaysia balance creating and enforcing laws and regulations with maintaining freedom of expression? Thank you.
Teo Nie Ching: Thank you, Alicia, for the questions. First of all, in Malaysia, we view online protection not just as a single action, but as a holistic ecosystem built on three core strategic thrusts. The first one is empowerment through digital inclusion, and of course literacy. And the second will be protection through a robust and balanced legal framework, and third, support through a whole-of-society, multi-stakeholder collaboration. So, currently in Malaysia, our internet coverage has reached 98.7% of the populated area. So, internet coverage is, I think, pretty impressive. At the same time, we also set up more than 900 national information dissemination centres, which act as a community hub providing on-the-ground digital literacy training, especially to the seniors, to the women, to the youth, who may be more susceptible to online risks. And not only that, we also recently launched a national internet safety campaign, and our target is to actually enter 10,000 schools in Malaysia. That is our primary school, secondary school, and of course, we aim to enter the campus of the university as well, so that we can engage with the user. And this programme is not the usual public awareness campaign. However, we are more specific. We developed a modular approach which depends on the audience. For example, if their age is between seven to nine, then what type of content is more suitable for them, and what type of interactive action we can actually design for them. So, for example, primary school, secondary school, we will be focusing on cyberbullying, and of course, to protect their own personal information, and then for the elder, we will teach them more, or share with them more about online scam, financial scam, etc. So, we believe that this is an approach whereby we need to go to the community, we need to engage them, we need to empower the community, so that we can raise their digital literacy. And of course, I think we also need to have a legal framework to protect our people, and it is very, very important for us to strike a balance between freedom of expression, but at the same time, also make sure this vulnerable group, they are actually protected by law. Last year, we have amended our act, that is the Communications and Multimedia Act, for the first time in 26 years, whereby we have actually increased the penalty for dissemination of child sexual abuse material, CSAM, grooming, or sim communication through digital platforms, with heavier penalties when minors are involved. And then, at the same time, the amended law also grants the Commission, the Malaysian Communications and Multimedia Commission, the authority to instruct the service provider to block or remove harmful content, enhancing platform accountability. At the same time, we also develop a code of conduct targeting the major social media platforms with more than 8 million users in Malaysia. Malaysia is a country with about 35 million population, so when we use the benchmark of 8 million, that was roughly about 25% of our population. We are hoping that by imposing this licensing regime, we will be able to impose this code of conduct against the service provider, but as I mentioned yesterday, I would not say this is a very successful attempt because the licensing regime is supposed to be implemented since 1st of January this year, but however, two major platforms in Malaysia, i.e. Meta and also Google, until today have yet to come to apply for the license.
So, I think the challenge faced by Malaysia maybe would be similar to many, many other countries as well. Malaysia alone, we don’t have sufficient negotiation power when we engage with tech giants like Meta and Google. So, how can we actually impose our standard over this platform to ensure that the harmful content, according to Malaysia context, can be removed instantly in a reasonable period of time has been quite challenging in Malaysia. We see that even though sometimes platforms would still cooperate with MCMC to remove certain harmful content, but it is always like the user or the scammer put it out and then, upon the request of MCMC, the content was taken down, but however, there is no permanent solution to stop all this harmful content from being put out on the social media, such as online gambling, such as scammer posts, etc. So, I think that’s it for now and looking forward to more questions.
Alishah Shariff: Thank you. I think that was a really good overview of how you can have both legislation and then a kind of voluntary code of conduct and some of the challenges that come with that in terms of how you are able to enforce it and also maybe towards the end you were getting to actually how do you prevent some of this stuff in the first place because obviously the takedowns are a reactive measure and there’s a bigger challenge here around how we prevent this sort of thing in the first place. So, we’ll now move to more of a focus on digital rights and we’ll turn to Nighat. So, at the Digital Rights Foundation, you lead efforts against online harassment and advocacy for privacy and freedom of expression. You’re also serving on the Meta Oversight Board. So, what gaps in terms of digital rights do you observe between the Global South and the Global North and what are your perspectives on platform accountability?
Nighat Dad: Yeah, thank you so much. So, at the Digital Rights Foundation, over the years, we have been witnessing the rise of digital surveillance, privacy violations, gender-based disinformation, which is very targeted, and now the disturbing rise of AI-generated deepfake content. Since 2016, through our digital security helpline, we have dealt with more than 20,000 complaints from hundreds of young women every month, female journalists, now more from women influencers and content creators, women politicians, scholars, and students. And this number is only to a digital security helpline which is being run by an NGO. This number is even higher when it goes to our federal investigation agency, Cybercrime Wing. And the people mostly who complain to us, they are being blackmailed, silenced, or driven offline by intimate images that they never consented to, some of which aren’t even real. In the last one and a half years, I would say we have seen this rise in deepfakes that have blurred the line between fact and fiction, but at the same time, we have seen that the harms are real in the offline space. It’s reputational damage, it’s emotional trauma, and in some cases, complete social isolation. And in worst cases, we have seen some women committing suicide. What’s even more alarming is how platforms respond to it, and as the Honorable Minister mentioned, many platforms in our part of the world are really not accountable to the governments, and too often, survivors are forced to become investigators of their own harm, hunting down copies of content, flagging it repeatedly, and navigating opaque reporting systems that offer little support and no urgency. And unfortunately, if they are not public figures, and if they are not politicians, the response is even more delayed, if it comes at all. And in my work at the Meta Oversight Board, the same patterns show up, just on a global scale. Last year, we reviewed two cases of deepfake intimate imagery, one involving a US public figure, a celebrity, and another involving a woman from India. And Meta responded quickly in the US jurisdiction, because media outlets had already reported on it, but in the Indian case, the image wasn’t even flagged or escalated, and it wasn’t added to Meta’s media matching service until the Oversight Board raised it. And what we noticed as a board, that if the system only works within these platforms when the media pays attention, what happens to the millions of women in the Global South who never make headlines? So we pushed Meta, in our recommendations in that case, to change its policy. We recommended that any AI-generated intimate image should be treated as non-consensual by default, that harm should not have to be proven through news coverage, and we advised that these cases be governed under the adult sexual exploitation policy, not buried under bullying and harassment, because what’s at stake is not just tone, it’s bodily autonomy. And I think one thing which is deeply concerning is that Meta, like several other platforms, has recently scaled back its proactive enforcement systems, now focusing mostly on illegal or high-severity cases while shifting the burden of content moderation onto users. That may sound like empowerment, but let me tell you that looks very different on the ground. In South Asia, many users don’t know how to report. And even when they do, the systems are in English. They are not even in our regional languages. The processes are opaque, and the fear of backlash is very real.
In India, for example, we have documented cases where women reporting abuse ended up being harassed further. That’s the same case in Pakistan. It’s not just by other users, but by the very mechanisms that are meant to protect them. And I’ll stop here, and we’ll add more to the policy level debate.
Alishah Shariff: Thank you. Thank you. I think there was so much in there. And I think what’s really coming through is that if we have this right to privacy and right to freedom of expression, that should be for all of us everywhere around the world. And the way that then we are treated when something does go wrong should also be equitable, because you can’t put it all on the individual to try and get all these images taken down. I think we’re definitely seeing a lot more on non-consensual intimate imagery abuse in the UK as well. And actually, the regulatory response and the legislative approach catching up with the real harm, there’s a big gap still. So thank you so much. And so next, we’ll turn to Arda. And so, Arda, you’re the president of the Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse Material in the Netherlands. And that regulates online content by ordering the removal of terrorist and CSAM content. So how do you strike a balance between online rights, the promotion of a safer online environment, and law enforcement? And what are some of your areas of concern?
Arda Gerkens: Yes, thank you. Thank you very much for inviting me on this panel. To address one of the last points in your question, how do we deal with law enforcement, we basically only target the content. So we’re not looking for perpetrators, the ones who are uploading it or downloading it. It’s not of our interest. But of course, certainly when it’s terrorist content, but also with child sexual abuse material, when there’s anything that is worrying for us, then we’ll report it to law enforcement so they can act upon it. And also, we have something that’s called deconfliction, just to make sure that we’re not taking down that material in areas where police or other services are already investigating, to make sure that we don’t harm their investigation. So far, that hasn’t happened yet. So I think we’re doing a good job. The other question is about, how do you balance human rights? And of course, with the powers we have, which is a very important power, I think, taking down or at least ordering the takedown of material, comes great responsibility. And definitely, when you look at the field of terrorism, it can easily be abused and harm freedom of speech, right? So we need to see how we can balance that. Well, first of all, we have legislation. So we have to hold to the standard of this legislation when we send out removal orders. But the legislation is quite broad and sometimes vague. For instance, one of the reasons of addressing something as terrorist content is the glorifying of an attack. Well, what’s glorifying? So what we’re doing at the moment is, together with other European countries, as this legislation is European legislation, we are now clarifying that law to see, so what do we think all of us is glorifying? What is a call to action? So that we can refine that and make it quite clear also to the outside world, how do we assess the reports we get? And what threshold does it meet before we send out a removal order? And then again, of course, we can also give that to the platform saying, listen, if it meets these and these criteria, then maybe you should take it down before we send our removal order. That would be much better than us sending removal orders. So this is on terrorist content. And as you can imagine, child sexual abuse material, that’s quite clear. There’s no debate about it. There shouldn’t be a debate about it. And I don’t think there’s any weighing of freedom of speech or any other human rights, except for the rights of the child that’s involved. But however, if you look at the removal of this type of content, you’ll see that on terrorist content, the majority of the material we find will be on a platform. But for child sex abuse material, unfortunately, as the Philippines has their downside, we have our downside that the Netherlands is a notorious hosting country for this kind of material. So we’re basically focusing our actions on hosting companies. Now, some of them are really bad actors. So this kind of imagery would not be the only bad things on their platforms. But there are also very many legit websites as well. So we need to make sure that we’re proportionate in our actions. We have really strong powers. We are able to even pull the plug of the internet, almost, let’s say, that way. Or we could even make sure that access providers block the access. But if you do such a thing, you need to make sure that you’re not harming innocent parties or companies involved. So again, we need to be very precise and very well know what we’re doing.
And so basically, for all this work, we engage a lot with industry to know the techniques. I think it was Paul who said here yesterday, for politicians, it’s very important to know the technical aspects of the online world. So it is for us. So we know a lot. We don’t know everything. There are lots of people who are much smarter than we are. So we engage with them. And we have an advisory board who would help us to make the difficult decisions. But we also engage with civil society to make sure that we uphold all these rights which are there to be able to balance it. And in the end, of course, it’s our decision. But we have to be able to explain it to the public, to you, why did we take that position? And did we look at the downside and the effects of it? And yeah, so that’s how we’re doing it. And it’s a very, I think, very interesting job. Now, on the matter of concerns of vulnerable groups, something I would like to address is something that we are currently seeing happening in the space of what used to be, I think, terrorism. I say used to be because terrorist actions used to be quite clear cut. It’s either right-wing terrorism. Look at the Christchurch shooting. Or it’s jihadism. Many of the attacks are well known from that. But we see more and more hybridization of these types of content mixed together with other content. So recently, we’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online. Can I say it that way? Because we find that these terrorist groups or these groups, extremist groups, are actually targeting vulnerable kids. For instance, they create a Telegram channel where kids can talk about their mental health state, eating disorders. They groom information out of them. And with that information, they then extort them. And they let them do horrible things like carving their bodies or making sexual images, which are then again spread. And we can see that this kind of material is radicalizing the kids very swiftly. And recently in Europe, we had some very young kids who were on the verge of committing attacks. And so what we see now is that this is accelerating at a very fast pace. And as our focus is on terrorism and child sex abuse, we cannot speak on eating disorders or mental health problems. But we know here at the table, too, there are lots of organizations who address these problems. But they’re probably not aware of these things happening. It’s all in the dark. And I think, again, if you talk about protection of vulnerable groups online, we need to bring these things to light. Like you basically said, the one case is brought to light by media. The other case is not brought to light by media. I think it’s up to us to bring it to light that these things are happening online. So at least the awareness is out there for parents and other caregivers to take care of the kids. But also for adults, that if somebody finally is able to speak about what’s happening, you are there to help them and support them. But yeah, we need much more to be done here as a coordinated approach to tackle this problem.
Alishah Shariff: Thank you, Arda. I think there was a lot in there in terms of proportionality and having a position that you can defend that is kind of balanced. I think this point on hybrid kind of threats is also really interesting. It’s something I haven’t heard before personally. And yeah, I think how you have a response that works across the whole system when these threats are hybrid and blended is really tricky, but also important to get right because there’s a lot at stake. So thank you. So next we’ll turn to Sandra. And if you want to just do a short introduction, that would be great. And then I’ll get to your question. OK.
Sandra Maximiano: So I’m Sandra Maximiano. I’m a chairwoman of ANACOM, the Portuguese National Authority for Communications. And at the moment, also, ANACOM deals with electronic communications, postal communications. But it’s also the digital service coordinator, so also on the digital matters, and also responsible for online terrorism and all these new issues, and also some competences under AI. So quite a broad authority. I’m an economist and specialized in behavioral and experimental economics. Thank you.
Alishah Shariff: So bringing together those two roles, I guess, as a regulator and also a behavioral and experimental economist, can you explain what behavioral economics is about and how it can be used to protect vulnerable groups online?
Sandra Maximiano: So let me first say that if we were rational human beings, we would probably not need to care so much about safety and have a big concern, because we would be super rational and be able to understand what is good and bad and immediately react upon that. But we are not. So behavioral economics is actually a field that blends insights from psychology and economics to fully understand how humans make decisions. And they make decisions not like machines. They don’t really maximize all the time their welfare, but they are affected by social issues, by social pressure, by their own emotions. And we all are affected. So, we use shortcuts, which we call heuristics, to make decisions, and we have a ton of cognitive biases. And these cognitive biases actually, they significantly influence how users interact and behave in an online context, and we have to take that into account. For instance, I can give you some very quick examples, like confirmation bias. Users may seek out information or sources that align with their existing beliefs, leading to echo chambers on social media platforms. This can, of course, perpetuate misinformation, stereotypes, and false beliefs, and limit exposure to diverse perspectives. Another one, overconfidence bias. Users may overestimate their online security knowledge, leading to risky behaviors, such as using weak passwords or ignoring security updates. Optimism bias. So we underestimate the risks of online scams or data breaches, believing that we are less likely to be targeted than others, which can lead to inadequate precautions. And on top of that, so we all suffer from these biases, but some groups suffer even more. So if we are thinking about children, we are thinking about some disabled groups, some people with mental health problems, they have, of course, these biases influencing their decisions even more. And we as regulators, we have to take that into account. So we should, of course, be aware how these biases are used to exploit the decision-making process online, and we have to fight with the same weapons. Basically, we have to make usage of these biases and try to make people take good decisions. So we have to understand these cognitive biases and also be aware that we can use them to make individuals take more informed decisions. AI can also increase the economic value of these cognitive biases. And why? Because AI makes firms, makes organizations, use them even more, exploit these cognitive biases and expose people even to higher risks. So we have to be aware of that. And also, AI systems do not need to exploit vulnerabilities to cause significant harm to vulnerable groups. Systems that, for instance, merely overlook the vulnerabilities of these groups could potentially cause significant harm. So I can give you an example.
Individuals with autism spectrum disorder often struggle to understand non-literal speech, such as irony or metaphor, due to impairments in recognizing the speaker's communicative intention. In recent years, chatbots have become very popular for engaging with and training individuals with autism to enhance their social skills. If a chatbot is trained solely on a database of typical adult conversations, it may incorporate elements such as jokes and metaphors, and individuals with autism may interpret them literally and act upon them, potentially leading to significant harm. So we have to be aware. As regulators, we really have to be aware of both intentional and unintentional harms that can be caused to individuals. But, as I said, we can also use these biases to help individuals make good decisions and to protect vulnerable groups online. Behavioral economics can be used to enhance online protections for vulnerable groups, such as children, disabled users, and marginalized communities, in many ways. We can better design user interfaces: websites and applications can be designed with user-friendly interfaces that consider the cognitive load of users. We can nudge safe behavior: platforms can implement nudges that guide users toward safer online behaviors, and presenting information about online risks in a clear and relatable way can improve understanding and compliance. This is particularly important. For instance, just to finish, regarding cyberbullying, behavioral economics can also play a significant role in protecting children. We can apply its principles to education and awareness campaigns, again framing information in a way that is clear and relatable for users. We can use social norms. Social norms can be a real problem, because people feel pressure to follow what others do, and this is a real concern with the online challenges that many children engage in and that put them at risk. But at the same time, we can use social norms messaging: highlighting positive behaviors and peer support through campaigns can shift perceptions around cyberbullying. By emphasizing that most children do not engage in cyberbullying behavior, we can create a social norm against it. So this is the point I want to make: we have to understand all these behavioral biases that are putting our children, and this is just an example, all of us, at risk online, but we can use the same weapons to encourage safer behavior. We really have to understand them and then, as regulators, play with the same weapons. We can use nudges to encourage reporting: nudges that remind children of the importance of reporting bullying can increase reporting rates, and there are studies that confirm this. Programs can be designed to teach children how to respond to cyberbullying effectively, and behavioral economics can inform the design of these programs. We can also incentivize positive online behavior and test different incentives, such as gamification and reward systems: schools and online platforms can implement reward systems that recognize and incentivize positive online behavior, and this can be tested using experimental tools. This is just one example, and there are many more. Online platforms can adopt clear policies against cyberbullying and communicate them effectively to users.
Again, behavioral economics can help in framing these policies to highlight the collective responsibility of users to maintain a safe online space. So this is the point I want to make, and this is just an example. The same can be applied to understanding algorithmic discrimination: how it works, how biases increase this discrimination, but also how we can use nudges and behavioral insights to fight the biases that are perpetuated in some algorithms. The message I want to leave, especially if you are a regulator or policymaker, is: be aware of behavioral insights. People are using them to make others behave in the way they want; firms do it a lot to sell more, and marketing strategies all make use of behavioral insights. So we as regulators have to use the same weapons, but for another purpose, with another goal in mind. That's it.
Alishah Shariff: Thank you, Sandra. I think it's great to have a different perspective on the issue, and I've never really heard anyone come at this from a behavioral bias angle, so thank you so much for that. How we actually turn this on its head and use gamification and these tools to incentivize slightly different behavior is a really interesting question. Something we've come to quite a lot in discussions has been the role of platforms, so I have a quick-fire question for each of our panelists before we open to the floor, and that question is: what forms of accountability beyond content takedowns should platforms adopt to protect marginalized users? I might start with Arda and go from this side.
Arda Gerkens: Thank you very much for that question. Well, first of all, I think we need to understand that the platforms already do a lot. I think we should start from the positive side, because there's a lot we can say about the platforms, but they do put a lot of effort in. The effort is there when it doesn't cost them any money; when it touches their revenue, it gets difficult. One thing is indeed to take down content, but there's a lot you can do with the algorithms by bringing extra attention to some of the material or by lowering it in attention, and here I think there's still a big opportunity. A piece of content in itself is not necessarily harmful: it could be harmful, but if it's only viewed by three or four people, it's not a problem. Once it spreads and is seen by millions, that is where it gets harmful. But when it's spreading that fast, that's also how the system works, because the content is there to be spread again, to get more attention and therefore more viewers, and more viewers means more advertising, which means more money for the platforms. So if we start a debate with them, I would really like to speak with them about their policy around moderation, in the sense of pushing material lower in their feeds or bringing it up higher.
Alishah Shariff: Thanks, Arda. I think next we'll turn to Nighat. Do you want me to repeat the question? No, you're good.
Nighat Dad: I think platforms are doing a lot, some of them, not all. We should look at the positive side of the platforms that still have oversight mechanisms that are working and that have given some good decisions and recommendations, which actually improved their policies. But at the same time, we really need to ask what to do about the platforms that are still thriving in our jurisdictions but have absolutely no accountability. They no longer have trust and safety teams; they don't have human rights teams. I'm talking about X here. I don't think anyone in this room has any point of contact with X for escalating content or for the disinformation that thrives on that platform. And it's very interesting for me to see, over a number of years, that in the North it's easier to say that we should move on to alternative platforms like Mastodon or Bluesky. But the problem in our jurisdictions is that the user base is not that digitally literate, and they are very comfortable with the platforms they already have. Neither civil society nor the government has access to these alternative platforms. So I'm very concerned: what are we thinking about those platforms? But at the same time, there are platforms that listen to all government requests, and the takedown numbers are very high. That's where many have mentioned necessity and proportionality, and I don't think many jurisdictions are actually respecting that. So we really need to look at what oversight or accountability mechanisms are out there and what different actors are doing. Not just government making policy and regulation, but what that regulation looks like: does it really respect the UN Guiding Principles or international human rights law when it comes to content moderation or algorithmic transparency? At the same time, what are other actors doing? Platforms at the moment have much more power in our part of the world. We do not have a Digital Services Act, but our governments are coming up with their own kinds of regulation, which might not be as ideal as the DSA and might not have the power of enforcement that the DSA has. So we really need to see what kind of precedents we are setting.
Alishah Shariff: Thank you. I think from our first two speakers, there's definitely something coming through around transparency of what the platforms share with us, whether that's how their content moderation processes work or other things, and then also a point around accountability. But also, as you said, Nighat, in designing this new regulation we've got to take into account privacy and freedom of expression, get the balance right, and then also be able to enforce effectively. So next I will turn to you, Nie Ching.
Teo Nie Ching: Yeah, a few things I would like to highlight. First of all, I would like to see the platforms improve their built-in reporting mechanisms. My experience in Malaysia is that sometimes even a public figure, a prominent figure such as Dato Lee Chong Wei, a very famous badminton player from Malaysia, is affected: there are scammers using his video and his photo to create scam-related posts. And even though Dato Lee Chong Wei has a Facebook account with the blue tick verification badge, him lodging a report through the built-in report mechanism is not going to be helpful. He has to compile all the links and send them to me, to MCMC, and then we need to forward them to Meta for the scam-related content to be taken down. So first of all, the self-reporting, built-in report mechanism is not functioning, and that puts a heavier burden on the regulator to do the content moderation job on behalf of the platform. I do not think that is fair. Second, we talk about transparency. Even when the scam-related posts are taken down, what actions are taken by the platform against the scammer? I think that is the question we need to pose to the platform providers, and I'm hoping to get an answer from them. How much advertising revenue do they collect from Malaysia each year? Do we know? I don't have the figure. How much advertising revenue do they collect from ASEAN collectively? We have never had the figure. For me, if you only take down the scam-related posts, it is not sufficient, because I need to know what type of action is being taken by the platform against the person who sponsored the post. Shouldn't that person be held responsible as well? Because we don't have that type of transparency, it's very difficult for us to hold the platform accountable. And then I would like to add a little bit more on the algorithm part, because I think algorithms are very, very powerful. However, when platforms design their algorithms, their only purpose is to make the platform more sticky, so that users will spend more time on it. I think it's time for the general public and civil society to also have a say in designing the algorithm, so that we can practice the "information diet" proposed by one of my favorite authors, Yuval Harari, and make sure that the information consumed by social media users every day is actually healthy content, and not just whatever content they like. Because I think that can be very, very dangerous.
Alishah Shariff: Thank you. Yeah, absolutely. I think the incentives of these platforms, understanding the stickiness point with algorithmic promotion, and the advertising revenues are another whole piece of the puzzle that we could have a separate discussion about. But thank you. And next, I'll turn to Raul.
Raoul Danniel Abellar Manuel: Yes, thank you. Before this month of June, in the House of Representatives we had a series of hearings by three House committees, namely the Committee on Information and Communications Technology, the Committee on Public Order, and the Committee on Public Information, and the topic of takedowns was discussed. In the fifth, and so far final, hearing, the government and representatives from Meta reported to the public hearing that they had an unwritten agreement that enables the government to send requests for content takedowns to Meta. Our reaction at that time was that, without any written basis or any law that explicitly sets the standards as to what content can be taken down and what should stay online, it becomes a slippery slope to use content takedowns as the primary approach to ensuring that our online spaces are safe. Decisions can end up being made in the shadows, without people being aware of, or informed about, the basis for takedowns. That's why, beyond takedowns, we really assert that platforms have a major responsibility. For example, when they can already monitor notable sources of content that is harmful to children, women, LGBT people, and other marginalized groups, be it bullying, hate speech, indiscriminate tagging, or posts promoting scams targeting Filipinos, then platforms should proactively report those sources to government. Platforms should also work with independent media and digital coalitions so that, aside from going after each piece of content, which would be very tedious and laborious, we also focus on the sources that promote a certain narrative or discourse, so that we are not just reactive in our approach. Being proactive would be the better way to go. So that's my piece. Thank you.
Alishah Shariff: Thank you so much, Raul. I think that's really interesting on knowing the sources. And you touched on a really important point about independent media, which sadly is in decline in a lot of the places where we live. We'll go to Neema next.
Neema Iyer: Thank you. So I want to shift gears a bit and talk more about the actual design of platforms. I am a member of Meta's Women's Safety Board, and sometimes they bring us in on design decisions that they make. Echoing some of the opinions of my colleagues at the other end of the table, it's really difficult work. It is so difficult to make these little design choices on the platform that impact user behavior. The thing I want to talk about is that content takedown is a very reactive measure that happens after the fact. The content is already shared, you go through this mechanism, and it can take days, months, years, or it will never happen; it will never be taken down. That also happens. I've reported many times and it doesn't get taken down. And there's no sense of justice for the people who are wronged: the content has already been up there, and then you take it down, but the damage is already done. The wound is already there. So I think it would be interesting to think about what kinds of design friction you can introduce that stop the content from being shared in the first place, and I think my behavioral economist colleague will probably have more to say about that. How do you stop it from happening, so that you're not in the position of having to take it down? As Arda mentioned, they're already coming up with guidelines and practices that it would be nice for platforms to use for takedowns, but what if these were applied before the content even comes up? Or, when someone goes online to insult a woman, for whatever reason, there's a nudge that says: are you sure you want to do that? What do you benefit from saying this? But of course, on the other end of that, it's also very problematic, and I really want to acknowledge that this method is problematic, because this sort of shadow banning has been used against feminist movements and against marginalized people, to silence them. When you talk about issues like colonization or racism, your posts actually don't get shown. And this is the problem: we don't have transparency on the algorithms that show or hide information, and really, all of us are at the mercy of the moral and political ideologies of whoever owns the platform. If they're a right-wing, anti-feminist person, then those are the rules of the platform, and we are all tied to those rules. What would be lovely, in a perfect world, would be if these algorithmic decisions were co-created by all of us, so that, whether we are doing child protection or counter-terrorism, we have all decided which things we don't want to be posted or shown: we have decided it as governments, as civil society, and as platforms, coming together. I think we really need platforms to take that accountability, to be more transparent, to do more audits, and to do more research with governments and civil society, so that we're not looking at the platforms as enemies, acknowledging they do a lot, while recognizing there is more need for us to collaborate on setting the guidelines. Thank you.
Alishah Shariff: Thanks, Neema. So yes, having that multi-stakeholder voice in shaping the things that govern the platforms we interact with, and I also really liked your point on introducing design friction; I think that's a really interesting one. So finally we'll turn to Sandra, and then we'll go to questions from the floor.
Sandra Maximiano: So I couldn't agree more with what has been said so far. Think about this: if you wanted to do a skydiving activity, you go to a firm and sign up for the service, and you always get a briefing about the security and safety measures you need to take. You are buying a service, and the firm that offers that service is obliged to provide that briefing. What I would really like is for online platforms that are providing us a service to also be obliged to give us at least these briefings about safety and about the measures we need to take as human beings. We need to be aware of our cognitive biases, as I said, and of how all this content and all this online interaction may impact our decisions and our behavior. So I think they should be more obliged to provide us that sort of information. Then, what is illegal offline should be illegal online; that's the main principle. But then we have this gray area of what is not illegal offline but that we are pushed to make illegal online or take down. Here I'm more in favor of measures like nudge interventions, applying these behavioral insights, increasing awareness, giving more education, and improving digital literacy, making us better users of online content and aware of what is out there that can really damage us. But of course, it needs to be much easier for users to complain to platforms, and that's one of the biggest problems nowadays. We can see, as digital services coordinators, that the first step users have to take is to complain to the platform, and then it's very hard; it's even hard to know whom to contact. This is something platforms need to be responsible for: take those complaints seriously and respond to users appropriately. And on algorithms, more audits are really needed. Regular auditing of algorithms for bias can help identify and correct discriminatory patterns. Diverse development teams are also something platforms should look for: building diverse teams of developers and stakeholders can help mitigate biases in algorithms. Transparency and accountability: making algorithms more transparent allows users to understand how decisions are made, which can also help identify potential discrimination, and again, gives users more education. Also, playing again with the behavioral side, default settings are a very important point for behavioral economists. Setting stronger privacy defaults can protect vulnerable groups: for instance, social media platforms can make private accounts the default setting for children, ensuring that their information is more secure unless they choose to change it. Changing the defaults, playing with those, is also very, very important. So basically, we have to be aware, as I said, of these cognitive biases. Platforms should give us more information about the cognitive biases that all of us face, give us briefings, information, and education, and be more accountable and transparent.
Alishah Shariff: Thank you, Sandra. That’s great. I think that’s been a really thought-provoking set of interventions on that question and now we will open to the floor. We’ve got about 15 minutes. So, if you’d like to ask a question, I’d encourage you to go to the microphone at the front just so that we can make sure everyone can hear and we’ll put these headsets in.
Anusha Rahman Khan: Thank you very much. I'm Anusha Rahman Khan. I'm a former minister for information technology and telecommunications; I remained minister for five years, and I'm the person who enacted the cybercrime law in 2016, which introduced 28 new penalties and criminalized these offenses. Violating the dignity of a natural person became an offense resulting in a criminal penalty of imprisonment or a fine. So we all know that it is important to legislate. We also know that when we are legislating and creating new offenses of this nature, the interest lobbies, the interest groups, come out very strongly against such activities, and we all know that the funding is provided by the commercial interest holders. When I was trying to make the enactment in 2016, I had huge resistance from the interest groups, and at that time it was difficult for people to appreciate how they were being played in the hands of commercial interests. And then I noticed that the same interest groups later found the law that was enacted to be an opportunity to generate revenue for their own interests. So this is a game that is being played globally, and by now we have seen the games being played for this revenue generation at the cost of the dignity of a natural person. It is not just the women, not just the children, not just the girls; it is all the people on this globe who are affected by online abuse. Now, the question. My question, and my ask, is of the Minister from Malaysia. I've heard you, and you are very eloquent, and your clarity is really appreciated. What do you think, in your experience, after legislating in Malaysia: have you been able to overcome the difficulties that enforcement entails? Because I feel, having been the former minister and now chairing the standing committee, and having been part of the information technology system for the last 32 years, that the time has come for us to stop begging the social media platforms, because we cannot continue to remain hostage to requests made for the welfare of our citizens. What is it that we can do together to introduce mechanisms so that we do not expose our children, our girls, and our women to the hands of people who probably have a different philosophy about content online? People sitting, perhaps, in the West have a different ideology and a different legal system governing them, but people sitting in the East have a different value system. We are a country where a single aspersion on a girl can cause her to jump from the window without waiting for the content to be removed. This has been the major issue for me: we in the East and the Far East live in a different value system. What do we think we can come up with together today and bring out as a solution? I do not think that commercial interests and revenue generation are going to allow you to provide the civil protection that is needed. So maybe you could guide me and tell me what, in your mind, we need to do, and come forward with some very solid recommendations. Thank you.
Alishah Shariff: Thank you. Are you happy to answer that? I think maybe we could do a really quick response to that one.
Teo Nie Ching: Thank you, madam. Thank you for your questions. Frankly speaking, after what we have been trying to do in Malaysia, passing the law is easy. Being in government means that we have the majority in Parliament, so passing the law is relatively easy. Of course we have to do a lot of engagement, consultation, and so on, but passing the law itself is not too difficult. However, as you rightly pointed out, enforcing it is super, super difficult. It is super, super challenging, and as I mentioned to everyone here just now, we have to admit that even though Malaysia introduced this licensing regime, which was supposed to be implemented from the 1st of January this year, until today only X, TikTok, that is ByteDance, and Telegram, which have more than 8 million users in Malaysia, have come to us to get the license. Until today, Meta and Google have yet to apply for the license from the Malaysian government. So the next question would be: what can we do? First of all, I think it is too difficult for Malaysia to deal with these tech giants on its own. It's too difficult. So I'm really hoping that we can have a common standard imposed on these social media platforms. My neighbouring country, Singapore, is doing something that I myself think is a good idea: they impose a duty on Meta to verify the identity of every advertiser if the advertisement is targeting Singaporean citizens. Meta is actually doing that, partly because Meta has an office in Singapore and is deemed to be licensed there as well. So my question would be: why can't you do it for Malaysia? Because if you verify the identity of the advertiser, it will be much, much easier for us to identify who the scammers are and who is behind the accounts promoting online gambling, et cetera. Why are you only doing it in Singapore and not for the rest of the world? So to me, it is very, very important that we have one international organisation identifying the responsibilities that should be carried out by the platforms, instead of each individual country doing it alone, because as Malaysia our negotiation power is just too limited. At the same time, to overcome the issue of the standard being set by the West, I think it is very important for us to engage these platforms as a bloc. For example, instead of Malaysia trying to engage with these platforms on its own, we hope that ASEAN as a whole can engage with them. If they engage with Malaysia alone, maybe they worry that the Malaysian government will abuse its power to restrict freedom of expression, but how about ASEAN as a bloc? As 10 ASEAN countries, we have similar cultures and we understand each other better, and therefore we should be able to set a standard that actually meets our cultural, historical, and religious backgrounds. So I think it is important for us not to apply one single standard, but to understand the world as multipolar, with different regions that can sit down and discuss the standards that should be imposed on platforms in each region. That is something I would really like to propose. Thank you.
Alishah Shariff: Thank you. Okay, there's going to be some future cooperation here, so that's great. I'll turn briefly to Nighat, who also wanted to provide some comments, and if we can keep them short, that would be great.
Nighat Dad: Very briefly. I think governments really need to understand that we are here in a multi-stakeholder spirit, and when we make national policies, multi-stakeholder means government, industry, and civil society. Civil society occupies a critical space, because when governments present policies and regulations, it is the role of civil society to think through the critical points and nuances and to hold the government accountable as well. When we are talking about accountability, it's about all powerful actors, government and platforms. Thank you.
Alishah Shariff: Yeah, that’s a really important perspective to bring. Okay, we’ll go to our next question in the room.
Andrew Campling: Thank you. Good morning. My name is Andrew Campling. I run a consultancy, and I'm a trustee of the Internet Watch Foundation, which, with partner hotlines, finds and takes down CSAM around the world. Over 300 million children annually are victims of technology-facilitated sexual abuse and exploitation; that's about 14% of the world's children every year. So with that in mind, does the panel agree that we should mandate the use of privacy-preserving age estimation or verification technology, to stop children from accessing adult platforms and adult content, and also to stop adults from accessing child-centric platforms and opening child accounts so they can target children? And does the panel agree that we should make better use of technologies like client-side scanning, which can be done in a privacy-preserving way, to prevent messaging platforms like WhatsApp from being used to share CSAM at scale around the world? Thank you.
Alishah Shariff: Thank you. I think we'll take one more question, and then I'll open it up.
Audience: Thank you very much, and I must start by congratulating the panel. It looks like there is a bit less testosterone on the panel today; it was a girls' day this morning. My name is John Kiariye from Kenya, and mine is more of a comment. Seated at the IGF, we are able to have a conversation around what it is that regulators can do, and regulators have other platforms where they can learn what to do with the technology that is available to us. But if we are talking about human-centered design, we have to remind ourselves that the offenders are human and the victims are human, and we have to look beyond what is happening online and see whether there are opportunities in already existing human structures in the community. Some of the technical things we talk about at the IGF are not practically applicable in some jurisdictions. For example, we come from places where big tech has platforms that people are interacting with, but they do not have a physical presence in some of these jurisdictions, so you have no place to go and have a conversation with these big tech companies to ask them to do some of the things we are proposing at the IGF. But if we look at already existing structures within the community, we might find an opportunity to empower the victim, in the sense that if it is a child who is under threat, in a school there are already existing social structures. There are social clubs. From the lessons we are learning in Kenya, we have clubs like the scouting clubs and the girl guides that already exist, and for young people, we know that if you make it cool, for them it becomes the truth. So what if this discussion starts offline for the victim, so that by the time they are getting online, they already have the tools and they are empowered? Because the bully is human and the victim is human. If we concentrate only on the technology, we are losing a very big part, because this young person can be trained to be a bully, and they can do that online; but if they were trained offline, long before they got onto the internet, then maybe it can become a movement that saves a generation. So my point, and the comment, is that even as we are focusing on the technology, let us not forget that this is technology for humans, and there are already existing social setups. These could be the family, the school, clubs, and all the other social setups that exist before we even get online. We will leave it to the regulator to deal with big tech, because that animal is too big for the victim to face up to. I thank you.
Alishah Shariff: Thank you, thank you. I think we'll answer Andrew's question first. There were two parts to that: something around age verification and the creation of child accounts, and whether that could be a preventative measure, and then something on client-side scanning on device, and whether that's a good proactive measure. I don't know if there's anyone in particular who wants to take that one. It would be good to hear from, yes? Okay, Neema and then Raul.
Neema Iyer: I think absolutely not. I live in Australia, and in the past year we passed a social media ban for children, and I have no idea what the plan for implementation is. It really means giving all your data to these platforms. I think it's a very slippery slope to a bad place. So my general opinion is no: we as humans need some level of privacy in our lives, and the fact is that people will get around all of these things anyway. I think there are better interventions than taking away the last shred of our privacy.
Alishah Shariff: Thank you, and Raul?
Raoul Danniel Abellar Manuel: On our end in the Philippines, we have observed that sometimes the best way to solve a problem is to find its underlying basis, because directly confronting the problem may not be enough. For example, in the case of CSAM and how young people are being used for these very bad purposes, we have realized that the economic basis is really a primary factor that drives children, and unfortunately their relatives, to this kind of livelihood so that they can live from day to day. So we also have measures to address issues like poverty and child hunger, alongside, of course, preventing the spread and proliferation of these kinds of materials that exploit children. I would also like to refer to another point, about how difficult it is, especially for those in the Global South, to hold social media platforms to account. I sympathize with our colleagues here, and I also agree that we need to form a coordinated response, because in our case, when we invited representatives from these social media platforms, they did not attend our first two hearings, and their reason was simply that they did not have an office in the Philippines, so why bother to attend? We were insulted by that kind of response, because we just want concerted action on the issues we are talking about. So we threatened them with a subpoena and the prospect of arrest if they would not attend the hearing. Fortunately, by the third hearing they attended, and that was the start of them sending representatives. But of course, we can't act alone, and we really have to work collaboratively. Thank you.
Alishah Shariff: Thank you. We actually only have a couple of minutes left. I think, Sandra, would you like to offer a kind of final comment?
Sandra Maximiano: Yes, just to add to and reinforce the point that what is illegal offline should continue to be illegal online. And if we restrict children from accessing certain services and contexts offline, I think we should take the same approach online. But that doesn't mean, of course, making every account private and banning every sort of possibility or behavior; there are better approaches than just going for extreme options. I would also like to add that this last intervention was very important, and thanks a lot for it, because we are humans, and we need, of course, to be aware of our shortcomings and our biases as humans. That needs to be taught, as was said, in schools, and we need to be better prepared now to deal with the exploitation of cognitive biases online, which is basically the use of technology to take advantage of them. So we need to be more aware of that, and we need more digital literacy, for sure. But let me also add something as an economist. We are in a world where there are lots of incentives for platforms to start developing features that take safety and security into account and to make a profit out of it. Here I'm just talking as an economist, and we will see that happening. There should be some minimum standards for everyone, and regulators should impose those, but I'm also pretty sure that there will be all sorts of features for sale that we, as users, will be able to buy and add on to our systems to increase our level of protection. So there is a huge market out there that is going to explore safety and security, and we should be prepared, as consumers and users, to make that choice. It will depend on our risk aversion, our risk preferences, and our safety preferences, but it will come.
Alishah Shariff: Thank you, Sandra. I think that is all we have time for today, so I’d like to say a massive thank you.
Arda Gerkens: Could I make one remark, which I think is really important? A positive message: look at the way we are here together as regulators. I've been at the IGF for 15 years. There's a lot changing, and there are a lot of politicians involved in that change. What we need to do now is come together globally, because, indeed, Malaysia has a problem with some platforms, and other countries might have problems with other platforms. Once we are able to get some platforms to obey our regulations, other ones will pop up. We really need to work together globally. We're part of the Global Online Safety Regulators Network, GOSRN. That's a new initiative, and I invite everybody who wants to be a part of it: please go to the GOSRN website. Let's see how we can tackle this problem, because it's a global problem, and we need to work together here. Thank you.
Alishah Shariff: Thank you, Arda. I think that's really the takeaway from this session for me: having this kind of multistakeholder, multidisciplinary discussion is the only way we will be able to tackle some of these challenges, and to take into account intersectionality, geographical differences, and the way platforms behave differently in different jurisdictions. Just very quickly, the official opening of the IGF is at 11 a.m. in the plenary room on the ground floor, so we hope to see you there. Thanks once again to all the panelists, to all of you, and to our online audience. Thank you.
Neema Iyer
Speech speed
179 words per minute
Speech length
1577 words
Speech time
525 seconds
One in three women across Africa experience online violence, leading many to delete their online identities due to lack of awareness about reporting mechanisms
Explanation
Research conducted with 3,000 women across Africa revealed that approximately one-third had experienced some form of online violence. This abuse led many women to completely delete their online presence because they were unaware of available reporting mechanisms and felt authorities would not listen to them if they sought help.
Evidence
Research study with 3,000 women across Africa showing one in three women experienced online violence
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Human rights | Sociocultural
Agreed with
– Nighat Dad
– Sandra Maximiano
Agreed on
Marginalized communities face disproportionate online harm with inadequate support systems
Women politicians face sexist and sexualized abuse online, causing many to avoid having online profiles or participating in digital spaces
Explanation
A social media analysis of women politicians during Uganda’s 2021 election showed they were frequently targets of sexist and sexualized abuse. The fear of such abuse meant many women politicians chose not to have online profiles or participate in digital political discourse.
Evidence
Social media analysis of women politicians during the Ugandan 2021 election
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Human rights | Sociocultural
AI systems create grave issues including under-representation, data bias, algorithmic discrimination, digital surveillance, and labor exploitation affecting marginalized women
Explanation
Research on AI’s impact on women revealed multiple systemic problems that disproportionately affect marginalized women. These include biased data representation, discriminatory algorithms, increased surveillance capabilities, and threats to low-wage jobs typically occupied by women.
Evidence
Research study on the impact of AI on women showing under-representation, data bias, algorithmic discrimination, digital surveillance, censorship, labor exploitation, and threats to low-wage jobs
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Human rights | Economic
Intersecting inequalities create large gaps in digital literacy and access, with platforms often not prioritizing smaller markets or local languages
Explanation
Marginalized communities face multiple overlapping disadvantages including limited digital access and skills. Platforms often neglect smaller markets, with countries like Uganda having 50+ languages but lacking platform support for local languages due to limited market share.
Evidence
Uganda has about 50 languages spoken but platforms don’t prioritize smaller countries due to limited market share
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Development | Sociocultural
Laws designed to protect are often weaponized against women and marginalized groups, being used to punish rather than protect them
Explanation
Cybercrime laws, data protection laws, and other protective legislation are frequently misused to target women, activists, and dissenting voices. Instead of providing protection, these laws become tools of oppression against the very groups they were meant to safeguard.
Evidence
Cybercrime laws and data protection laws have been used against women, dissenting voices, and activists to punish rather than protect them
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Legal and regulatory | Human rights
Agreed with
– Nighat Dad
Agreed on
Laws designed to protect can be weaponized against vulnerable groups
Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing
Explanation
Current content moderation relies on reactive takedown processes that occur after harmful content has already been shared and caused damage. There’s a need for proactive design elements that create friction to prevent harmful content from being shared in the first place, though this approach has its own risks of censorship.
Evidence
Personal experience reporting content that doesn’t get taken down, and acknowledgment that damage is already done even when content is eventually removed
Major discussion point
Platform Accountability and Content Moderation
Topics
Legal and regulatory | Sociocultural
Agreed with
– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
Agreed on
Platform accountability requires transparency beyond content takedowns
Algorithmic decisions should be co-created by governments, civil society, and platforms together rather than left to platform owners’ ideologies
Explanation
Current algorithmic content moderation reflects the moral and political ideologies of platform owners, creating unfair power dynamics. A collaborative approach involving multiple stakeholders would ensure more balanced and transparent decision-making about what content should be promoted or suppressed.
Evidence
Shadow banning has been used against feminist movements and marginalized people discussing issues like colonization and racism
Major discussion point
Platform Accountability and Content Moderation
Topics
Legal and regulatory | Human rights
Nighat Dad
Speech speed
137 words per minute
Speech length
1238 words
Speech time
542 seconds
Digital Rights Foundation helpline has handled over 20,000 complaints since 2016, with hundreds of young women reporting monthly about blackmail and harassment
Explanation
The Digital Rights Foundation’s helpline has processed over 20,000 complaints since 2016, receiving hundreds of reports monthly from young women, female journalists, influencers, politicians, and students. These complaints primarily involve blackmail and harassment through non-consensual intimate images.
Evidence
Over 20,000 complaints handled since 2016 through digital security helpline, with hundreds of complaints from young women monthly
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Human rights | Cybersecurity
Agreed with
– Neema Iyer
– Sandra Maximiano
Agreed on
Marginalized communities face disproportionate online harm with inadequate support systems
Rise of AI-generated deepfake content is causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide
Explanation
The increasing prevalence of deepfake technology has created new forms of harm where people are blackmailed and silenced using intimate images they never consented to, some of which aren’t even real. The psychological impact includes severe reputational damage, emotional trauma, and in extreme cases, suicide.
Evidence
Rise in deepfakes over the last one and a half years, with cases of women committing suicide due to the harm
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Human rights | Cybersecurity
Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment
Explanation
Meta Oversight Board cases revealed significant disparities in platform response times based on geography and prominence. A US celebrity’s deepfake case received immediate attention due to media coverage, while an Indian woman’s case wasn’t flagged until the Oversight Board intervened.
Evidence
Meta Oversight Board reviewed two deepfake cases – US celebrity case received quick response due to media attention, while Indian case wasn’t flagged until Oversight Board raised it
Major discussion point
Platform Accountability and Content Moderation
Topics
Human rights | Legal and regulatory
Agreed with
– Arda Gerkens
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Neema Iyer
Agreed on
Platform accountability requires transparency beyond content takedowns
Meta’s recent scaling back of proactive enforcement systems shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only
Explanation
Meta and other platforms have reduced their proactive content moderation, focusing mainly on illegal or high-severity cases while expecting users to handle more moderation themselves. This is especially problematic in South Asia where users may not know how to report, systems are only in English, and fear of backlash is significant.
Evidence
Meta has scaled back proactive enforcement systems; reporting systems are in English, not regional languages; documented cases in India and Pakistan where women reporting abuse faced further harassment
Major discussion point
Platform Accountability and Content Moderation
Topics
Human rights | Sociocultural
Agreed with
– Neema Iyer
Agreed on
Laws designed to protect can be weaponized against vulnerable groups
Raoul Danniel Abellar Manuel
Speech speed
132 words per minute
Speech length
1436 words
Speech time
650 seconds
Philippines passed Republic Act 11930 addressing online sexual abuse of children with extraterritorial jurisdiction provisions
Explanation
The Philippines enacted Republic Act 11930 in July 2022 to combat online sexual abuse and exploitation of children. A key component is the assertion of extraterritorial jurisdiction, allowing the state to prosecute offenses that commence in the Philippines or are committed abroad by Filipino citizens against Philippine citizens.
Evidence
Republic Act 11930 lapsed into law on July 30, 2022, and includes extraterritorial jurisdiction provisions recognizing coordinated networks involving multiple locations
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Cybersecurity
House of Representatives approved expanded anti-violence bill defining psychological violence through electronic devices as violence against women
Explanation
The Philippine House of Representatives passed legislation expanding the definition of violence against women to include psychological violence committed through electronic or ICT devices. However, the bill still awaits Senate approval in the bicameral system.
Evidence
House of Representatives approved the expanded anti-violence bill, but waiting for Senate deliberations in the bicameral system
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Human rights
Amendments to Safe Spaces Act set higher standards for government officials promoting discrimination through digital platforms
Explanation
The Philippines approved committee-level amendments to the Safe Spaces Act that establish stricter standards for government officials who promote sexual harassment or discrimination against LGBT communities through digital or social media platforms.
Evidence
Amendments approved at committee level targeting government officials who discriminate against LGBT community through digital platforms
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Human rights
Pending bill seeks to criminalize ‘red tagging’ – labeling individuals as state enemies or terrorists without basis
Explanation
The House of Representatives has a pending bill to criminalize the practice of ‘red tagging’ – falsely labeling individuals or groups as state enemies, subversives, or terrorists without proper basis. The Supreme Court has adopted this term, recognizing it as a source of harm that extends into the physical world.
Evidence
Supreme Court adopted the term ‘red tagging’ and recognized it as causing harm that transcends into the physical world
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Human rights
Platforms should proactively report sources of harmful content to government rather than just reacting to individual posts
Explanation
Beyond content takedowns, platforms should take greater responsibility by proactively identifying and reporting sources of harmful content to government authorities. This would shift from reactive individual post removal to proactive source identification, working with independent media and digital coalitions.
Evidence
Platforms can monitor notable sources of harmful content including bullying, hate speech, indiscriminate tagging, scams, and should work with independent media and digital coalitions
Major discussion point
Platform Accountability and Content Moderation
Topics
Legal and regulatory | Cybersecurity
Agreed with
– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Neema Iyer
Agreed on
Platform accountability requires transparency beyond content takedowns
Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence
Explanation
When the Philippine House of Representatives invited social media platform representatives to hearings, they initially refused to attend, stating they had no office in the Philippines and therefore no obligation to participate. Only after threats of subpoenas and arrest did they begin attending by the third hearing.
Evidence
Platforms did not attend first two hearings claiming no office in Philippines; attended third hearing after threats of subpoena and arrest
Major discussion point
International Cooperation and Enforcement Challenges
Topics
Legal and regulatory | Economic
Agreed with
– Teo Nie Ching
– Anusha Rahman Khan
Agreed on
Individual countries lack sufficient power to regulate global tech platforms effectively
Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material
Explanation
The root cause of child sexual abuse and exploitation often lies in economic desperation, where poverty drives children and their families to engage in such activities for daily survival. Effective solutions must address underlying economic issues like poverty and child hunger alongside technical and legal measures.
Evidence
Philippines ranks as hotspot for online sexual abuse of children; economic basis drives children and relatives to this livelihood for day-to-day survival
Major discussion point
Age Verification and Privacy Concerns
Topics
Development | Cybersecurity
Teo Nie Ching
Speech speed
153 words per minute
Speech length
1789 words
Speech time
699 seconds
Malaysia amended Communication and Multimedia Act after 26 years, increasing penalties for child sexual abuse material and grooming
Explanation
Malaysia made its first amendment to the Communication and Multimedia Act in 26 years, significantly increasing penalties for dissemination of child sexual abuse material, grooming, and similar communications through digital platforms. The law imposes heavier penalties when minors are involved and grants the Malaysian Communications and Multimedia Commission authority to instruct service providers to block or remove harmful content.
Evidence
First amendment in 26 years to Communication and Multimedia Act, with heavier penalties when minors are involved and new powers for MCMC to instruct content blocking/removal
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Cybersecurity
Malaysia developed code of conduct for social media platforms with over 8 million users, though major platforms like Meta and Google have not applied for licenses
Explanation
Malaysia implemented a licensing regime with a code of conduct targeting major social media platforms serving over 8 million users (about 25% of Malaysia’s 35 million population). However, despite the January 2025 implementation date, major platforms Meta and Google have not applied for licenses, while only X, TikTok, and Telegram have complied.
Evidence
Licensing regime for platforms with 8+ million users (25% of 35 million population); only X, TikTok, and Telegram applied for licenses while Meta and Google have not
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Economic
Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal
Explanation
Platform reporting systems are inadequate, as demonstrated by cases where even verified public figures like badminton player Dato Lee Chong Wei cannot successfully report scam content using their accounts. Instead, they must manually compile links and send them to regulators, who then forward them to platforms for removal.
Evidence
Dato Lee Chong Wei, a famous badminton player with verified Facebook account, cannot successfully use built-in reporting and must send links to MCMC for forwarding to Meta
Major discussion point
Platform Accountability and Content Moderation
Topics
Legal and regulatory | Economic
Agreed with
– Arda Gerkens
– Nighat Dad
– Raoul Danniel Abellar Manuel
– Neema Iyer
Agreed on
Platform accountability requires transparency beyond content takedowns
Platforms lack transparency about actions taken against scammers and advertisers, making accountability difficult to assess
Explanation
While platforms may remove scam-related posts, there’s no transparency about what actions are taken against the actual scammers or those who sponsored the posts. Malaysia lacks access to data about advertising revenue collected from their jurisdiction, making it difficult to hold platforms accountable for their broader responsibilities.
Evidence
No transparency on actions against scammers who sponsor posts; no access to data on advertising revenue collected from Malaysia or ASEAN region
Major discussion point
Platform Accountability and Content Moderation
Topics
Legal and regulatory | Economic
Agreed with
– Arda Gerkens
– Nighat Dad
– Raoul Danniel Abellar Manuel
– Neema Iyer
Agreed on
Platform accountability requires transparency beyond content takedowns
Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN
Explanation
Malaysia’s experience shows that individual countries have limited negotiation power with major tech platforms. A coordinated approach through regional blocs like ASEAN would provide stronger negotiating positions and allow for standards that reflect regional cultural, historical, and religious contexts rather than Western-imposed standards.
Evidence
Meta complies with advertiser identity verification in Singapore but not Malaysia; Malaysia alone has insufficient negotiation power with tech giants
Major discussion point
International Cooperation and Enforcement Challenges
Topics
Legal and regulatory | Economic
Agreed with
– Raoul Danniel Abellar Manuel
– Anusha Rahman Khan
Agreed on
Individual countries lack sufficient power to regulate global tech platforms effectively
Different regions need different standards that meet their cultural, historical, and religious backgrounds rather than one-size-fits-all approaches
Explanation
Rather than applying universal Western standards, different regions should be able to establish standards that align with their specific cultural, historical, and religious contexts. Regional blocs like ASEAN, with similar cultural understanding, could set appropriate standards for platform regulation in their jurisdictions.
Evidence
ASEAN countries have similar culture and understand each other better, allowing them to set standards meeting their cultural, historical, and religious backgrounds
Major discussion point
International Cooperation and Enforcement Challenges
Topics
Legal and regulatory | Sociocultural
Arda Gerkens
Speech speed
170 words per minute
Speech length
1792 words
Speech time
629 seconds
Netherlands established unique regulatory body ATKM with special powers to identify and remove terrorist content and child sexual abuse material
Explanation
The Netherlands created ATKM, a unique regulatory body with special authority to investigate and identify terrorist content and child sexual abuse material. The organization can order content removal and fine non-compliant entities, representing a first-of-its-kind regulatory approach with direct content intervention powers.
Evidence
ATKM is described as unique and first regulator with special right to dive into terrorist and child sexual abuse content, with power to remove content and fine non-compliant entities
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Cybersecurity
Terrorist groups are increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion
Explanation
ATKM has identified a concerning trend where extremist groups create Telegram channels focused on mental health and eating disorders to target vulnerable children. These groups extract personal information through grooming, then extort children into harmful activities like self-harm and creating sexual images, which are then distributed.
Evidence
Terrorist groups create Telegram channels where children discuss mental health and eating disorders, groom them to extract personal information, then extort them into carving their bodies or making sexual images
Major discussion point
Hybrid Threats and Emerging Challenges
Topics
Cybersecurity | Human rights
Hybridization of terrorist content with child sexual abuse material is radicalizing children rapidly, leading to cases of very young potential attackers
Explanation
There’s an emerging pattern of terrorist environments containing child sexual abuse material, creating hybrid threats that rapidly radicalize vulnerable children. This hybridization has accelerated to the point where very young children in Europe have been found on the verge of committing attacks.
Evidence
Finding child sexual abuse material within online terrorist environments; recent cases in Europe of very young children on the verge of committing attacks
Major discussion point
Hybrid Threats and Emerging Challenges
Topics
Cybersecurity | Human rights
Coordinated approach needed to tackle hybrid problems that span multiple regulatory domains
Explanation
The hybrid nature of emerging threats requires coordination across different regulatory domains and organizations. While ATKM focuses on terrorism and child sexual abuse, issues like eating disorders and mental health fall under other organizations’ purview, necessitating collaborative approaches to address interconnected problems.
Evidence
ATKM cannot address eating disorders or mental health problems directly, but these issues are connected to terrorist grooming activities
Major discussion point
Hybrid Threats and Emerging Challenges
Topics
Legal and regulatory | Cybersecurity
Sandra Maximiano
Speech speed
123 words per minute
Speech length
2154 words
Speech time
1049 seconds
Users are affected by cognitive biases like confirmation bias, overconfidence bias, and optimism bias that influence online behavior and decision-making
Explanation
Behavioral economics reveals that users are not rational decision-makers but are influenced by psychological factors and cognitive biases. These include confirmation bias (seeking information that confirms existing beliefs), overconfidence bias (overestimating security knowledge), and optimism bias (underestimating personal risk of scams or breaches).
Evidence
Examples include confirmation bias leading to echo chambers, overconfidence bias causing risky behaviors like weak passwords, and optimism bias leading to inadequate precautions against online threats
Major discussion point
Behavioral Economics and Digital Safety
Topics
Human rights | Sociocultural
Vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design
Explanation
While all users experience cognitive biases, certain vulnerable populations including children, disabled individuals, and those with mental health problems are disproportionately affected. Regulators must understand and account for these heightened vulnerabilities when designing policies and interventions.
Evidence
Children, disabled groups, and people with mental health problems have cognitive biases influencing their decisions even more than general population
Major discussion point
Behavioral Economics and Digital Safety
Topics
Human rights | Development
Agreed with
– Neema Iyer
– Nighat Dad
Agreed on
Marginalized communities face disproportionate online harm with inadequate support systems
AI systems can exploit cognitive biases and overlook vulnerabilities, potentially causing significant harm even without intentional exploitation
Explanation
AI increases the economic value of exploiting cognitive biases and can cause harm to vulnerable groups even without malicious intent. For example, chatbots trained on typical adult conversations may use metaphors and jokes that individuals with autism interpret literally, potentially leading to harmful actions.
Evidence
Example of chatbots intended for users with autism that, because they are trained on typical adult conversations, may incorporate jokes and metaphors which individuals with autism may interpret literally and act upon
Major discussion point
Behavioral Economics and Digital Safety
Topics
Human rights | Infrastructure
Behavioral economics can enhance online protection through better user interface design, nudging safe behavior, and using social norms messaging
Explanation
The same behavioral insights that create vulnerabilities can be redirected to enhance protection. This includes designing user-friendly interfaces that consider cognitive load, implementing nudges that guide safer behaviors, and using social norms messaging to promote positive online conduct.
Evidence
Examples include framing cyberbullying information clearly, using social norms to highlight that most children don’t engage in bullying, and implementing reward systems for positive behavior
Major discussion point
Behavioral Economics and Digital Safety
Topics
Human rights | Sociocultural
Regulators should use the same behavioral insights that firms use for marketing, but redirect them toward safety and protection goals
Explanation
Marketing strategies extensively use behavioral insights to influence consumer behavior and increase sales. Regulators and policymakers should adopt these same techniques but redirect them toward promoting safety, security, and positive online behavior rather than commercial objectives.
Evidence
Marketing strategies use behavioral insights to sell more; regulators should use the same weapons but with different goals in mind
Major discussion point
Behavioral Economics and Digital Safety
Topics
Legal and regulatory | Economic
Platforms should provide safety briefings to users similar to how other service providers are required to give security information
Explanation
Just as service providers in other industries (like skydiving) are required to provide safety briefings before service delivery, online platforms should be mandated to provide users with information about cognitive biases, online risks, and safety measures. This would help users make more informed decisions about their online behavior.
Evidence
Comparison to skydiving services that must provide safety briefings before service delivery
Major discussion point
Platform Accountability and Content Moderation
Topics
Legal and regulatory | Human rights
What is illegal offline should remain illegal online, but extreme restriction measures may not be the best approach
Explanation
The fundamental principle should be that what is illegal offline should also be illegal online. However, when it comes to protecting children, blanket restrictions such as complete bans tend to be overly broad or ineffective, and more targeted approaches are preferable.
Major discussion point
Age Verification and Privacy Concerns
Topics
Legal and regulatory | Human rights
Anusha Rahman Khan
Speech speed
148 words per minute
Speech length
594 words
Speech time
239 seconds
Former Pakistani minister enacted cybercrime law in 2016 introducing 28 new penalties criminalizing violations of the dignity of natural persons
Explanation
As Pakistan's former IT and telecommunications minister, Anusha Rahman Khan enacted comprehensive cybercrime legislation in 2016 that introduced 28 new criminal penalties. The law specifically criminalized violations of the dignity of natural persons, with offenders facing jail time or fines for online abuse.
Evidence
Cybercrime law enacted in 2016 with 28 new penalties, criminalizing violations of the dignity of natural persons, punishable by jail time or fines
Major discussion point
Legislative and Regulatory Responses
Topics
Legal and regulatory | Human rights
Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination
Explanation
The fundamental challenge is that commercial interest groups and revenue generation motives of platforms conflict with the need to protect citizens from online harm. This creates a situation where countries become hostage to platform policies, particularly problematic when Western platforms apply different value systems to Eastern societies where online harm can have more severe consequences.
Evidence
Interest groups funded by commercial interests resisted cybercrime legislation; the same groups later found revenue opportunities in the law; differing value systems between East and West mean a single aspersion can drive a victim to suicide
Major discussion point
International Cooperation and Enforcement Challenges
Topics
Economic | Legal and regulatory
Agreed with
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
Agreed on
Individual countries lack sufficient power to regulate global tech platforms effectively
Audience
Speech speed
148 words per minute
Speech length
460 words
Speech time
185 seconds
Existing social structures like schools, clubs, and family units should be leveraged to empower victims and prevent online abuse before it occurs
Explanation
Rather than focusing solely on technical solutions, existing community structures such as schools, scouting clubs, girl guides, and family units should be utilized to empower potential victims before they encounter online threats. These established social frameworks can provide foundational protection and education.
Evidence
Examples from Kenya including scouting clubs and girl guides; social clubs in schools that already exist as community structures
Major discussion point
Community-Based Solutions
Topics
Development | Sociocultural
Offline education and empowerment can prepare young people with tools before they encounter online threats
Explanation
By training and empowering young people through offline education and community programs, they can be better prepared to handle online threats when they encounter them. This proactive approach focuses on building resilience and awareness before exposure to digital risks.
Evidence
If young people are trained offline before they get online, and if that training makes safe behavior 'cool' for them, it becomes their truth and can save a generation
Major discussion point
Community-Based Solutions
Topics
Development | Sociocultural
Human-centered design must recognize that both offenders and victims are human, requiring community-level interventions alongside technical solutions
Explanation
Technology solutions alone are insufficient because both perpetrators and victims of online abuse are human beings embedded in social contexts. Effective interventions must address the human element through community-based approaches that work alongside technical measures, recognizing that many jurisdictions lack direct access to big tech platforms.
Evidence
Big tech platforms don’t have physical presence in many jurisdictions, making direct engagement impossible; both bullies and victims are human and can be influenced by community interventions
Major discussion point
Community-Based Solutions
Topics
Development | Human rights
Alishah Shariff
Speech speed
177 words per minute
Speech length
2027 words
Speech time
683 seconds
The digital world offers opportunities for connection, learning, and growth but also brings risks and downsides that are felt more acutely by vulnerable groups
Explanation
While digital technologies provide significant benefits for human connection and development, they simultaneously create new forms of harm and risk. These negative impacts disproportionately affect vulnerable populations including children, individuals with disabilities, and marginalized communities.
Evidence
Consequences of online harm can have ripple effects into real lives, causing distress, harm, and isolation
Major discussion point
Online Safety Challenges for Marginalized Communities
Topics
Human rights | Development
Effective policy responses to online harms require targeted, inclusive, and enforceable approaches developed through multistakeholder collaboration
Explanation
Addressing online safety challenges requires policy frameworks that are specifically designed for different contexts, include diverse perspectives, and can be effectively implemented. This necessitates collaboration between parliamentarians, regulators, and advocacy experts across different geographies.
Evidence
Session brings together diverse panel of parliamentarians, regulators, and advocacy experts across range of geographies and contexts
Major discussion point
International Cooperation and Enforcement Challenges
Topics
Legal and regulatory | Human rights
Andrew Campling
Speech speed
125 words per minute
Speech length
155 words
Speech time
73 seconds
Over 300 million children annually are victims of technology-facilitated sexual abuse and exploitation, representing about 14% of the world’s children
Explanation
The scale of child sexual abuse and exploitation facilitated by technology is massive, affecting approximately one in seven children globally each year. This statistic demonstrates the urgent need for comprehensive protective measures in digital spaces.
Evidence
Over 300 million children annually are victims, representing about 14% of world’s children; Internet Watch Foundation finds and takes down CSAM material with partner hotlines around the world
Major discussion point
Age Verification and Privacy Concerns
Topics
Cybersecurity | Human rights
Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children
Explanation
Technical solutions like age verification can help create barriers that prevent inappropriate access to platforms while maintaining privacy protections. This includes stopping children from accessing adult content and preventing adults from creating child accounts to target minors.
Evidence
Need to stop children from accessing adult platforms and adult content, and stop adults from accessing child-centric platforms and opening child accounts to target children
Major discussion point
Age Verification and Privacy Concerns
Topics
Cybersecurity | Human rights
Disagreed with
– Neema Iyer
Disagreed on
Age verification and privacy-preserving technologies for child protection
Client-side scanning technology should be better utilized to prevent messaging platforms from being used to share child sexual abuse material at scale
Explanation
Privacy-preserving technologies like client-side scanning can help detect and prevent the distribution of child sexual abuse material through encrypted messaging platforms. This approach can maintain user privacy while providing protection against large-scale distribution of harmful content.
Evidence
Messaging platforms like WhatsApp are being used to share CSAM at scale around the world, which can be addressed in a privacy-preserving way
Major discussion point
Age Verification and Privacy Concerns
Topics
Cybersecurity | Human rights
Agreements
Agreement points
Platform accountability requires transparency beyond content takedowns
Speakers
– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Neema Iyer
Arguments
Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal
Platforms lack transparency about actions taken against scammers and advertisers, making accountability difficult to assess
Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment
Platforms should proactively report sources of harmful content to government rather than just reacting to individual posts
Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing
Summary
All speakers agreed that current platform accountability mechanisms are insufficient, with particular emphasis on the need for transparency in content moderation processes, proactive identification of harmful sources, and addressing geographic inequalities in platform responses.
Topics
Legal and regulatory | Human rights | Economic
Individual countries lack sufficient power to regulate global tech platforms effectively
Speakers
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Anusha Rahman Khan
Arguments
Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN
Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence
Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination
Summary
Government representatives from Malaysia, Philippines, and Pakistan all acknowledged that individual nations have limited leverage against major tech platforms, emphasizing the need for coordinated international or regional approaches to regulation.
Topics
Legal and regulatory | Economic
Marginalized communities face disproportionate online harm with inadequate support systems
Speakers
– Neema Iyer
– Nighat Dad
– Sandra Maximiano
Arguments
One in three women across Africa experience online violence, leading many to delete their online identities due to lack of awareness about reporting mechanisms
Digital Rights Foundation helpline has handled over 20,000 complaints since 2016, with hundreds of young women reporting monthly about blackmail and harassment
Vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design
Summary
Civil society representatives agreed that vulnerable populations experience higher rates of online harm and face additional barriers in accessing help, requiring specialized approaches that account for their unique vulnerabilities.
Topics
Human rights | Sociocultural
Laws designed to protect can be weaponized against vulnerable groups
Speakers
– Neema Iyer
– Nighat Dad
Arguments
Laws designed to protect are often weaponized against women and marginalized groups, being used to punish rather than protect them
Meta’s recent scaling back of proactive enforcement systems shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only
Summary
Both civil society advocates highlighted the paradox where protective legislation and platform policies can be misused to harm the very groups they were intended to protect, particularly in Global South contexts.
Topics
Legal and regulatory | Human rights
Similar viewpoints
Both speakers emphasized the need for collaborative, multi-stakeholder approaches to platform governance and the importance of using behavioral insights to promote safer online behavior through design interventions.
Speakers
– Neema Iyer
– Sandra Maximiano
Arguments
Algorithmic decisions should be co-created by governments, civil society, and platforms together rather than left to platform owners’ ideologies
Behavioral economics can enhance online protection through better user interface design, nudging safe behavior, and using social norms messaging
Topics
Legal and regulatory | Human rights | Sociocultural
Both emphasized that technical solutions alone are insufficient and that addressing root causes through community-based interventions and socioeconomic factors is essential for effective protection.
Speakers
– Raoul Danniel Abellar Manuel
– Audience
Arguments
Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material
Human-centered design must recognize that both offenders and victims are human, requiring community-level interventions alongside technical solutions
Topics
Development | Human rights | Sociocultural
Both highlighted emerging hybrid threats that exploit vulnerable populations through sophisticated targeting and manipulation techniques, requiring coordinated responses across different regulatory domains.
Speakers
– Arda Gerkens
– Nighat Dad
Arguments
Terrorist groups are increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion
Rise of AI-generated deepfake content is causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide
Topics
Cybersecurity | Human rights
Unexpected consensus
Rejection of extreme age verification measures
Speakers
– Neema Iyer
– Sandra Maximiano
Arguments
Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing
What is illegal offline should remain illegal online, but extreme restriction measures may not be the best approach
Explanation
Despite coming from different professional backgrounds (civil society advocacy vs. regulatory economics), both speakers rejected blanket age verification or social media bans as solutions, instead favoring more nuanced approaches that preserve privacy while promoting safety.
Topics
Legal and regulatory | Human rights
Need for behavioral and design-based interventions over purely legal approaches
Speakers
– Sandra Maximiano
– Neema Iyer
– Audience
Arguments
Regulators should use the same behavioral insights that firms use for marketing, but redirect them toward safety and protection goals
Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing
Existing social structures like schools, clubs, and family units should be leveraged to empower victims and prevent online abuse before it occurs
Explanation
Unexpectedly, speakers from regulatory, advocacy, and community perspectives all converged on the idea that behavioral interventions and proactive design changes are more effective than reactive legal measures, representing a shift from traditional regulatory thinking.
Topics
Human rights | Sociocultural | Development
Overall assessment
Summary
The speakers demonstrated strong consensus on several key issues: the inadequacy of current platform accountability mechanisms, the need for international coordination to effectively regulate global tech platforms, the disproportionate impact of online harm on marginalized communities, and the limitations of purely reactive legal approaches. There was also notable agreement on the need for more proactive, design-based interventions and multi-stakeholder collaboration.
Consensus level
High level of consensus with significant implications for policy development. The agreement across different stakeholder groups (government officials, regulators, civil society advocates) suggests these issues transcend traditional boundaries and require coordinated responses. The consensus on moving beyond reactive measures toward proactive design interventions represents a potential paradigm shift in online safety approaches. However, the challenge remains in translating this consensus into actionable policies given the power imbalances between individual nations and global tech platforms.
Differences
Different viewpoints
Age verification and privacy-preserving technologies for child protection
Speakers
– Neema Iyer
– Andrew Campling
Arguments
Mandatory age verification would hand users' data to platforms and erode privacy; Australia's recently passed social media ban for children has no clear implementation plan, people will circumvent such measures anyway, and better interventions exist that preserve privacy
Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children
Summary
Andrew Campling advocates for mandatory privacy-preserving age verification technology to protect children online, while Neema Iyer strongly opposes such measures, arguing they compromise privacy and are ineffective since people will circumvent them anyway.
Topics
Cybersecurity | Human rights
Unexpected differences
Approach to addressing root causes of child exploitation
Speakers
– Raoul Danniel Abellar Manuel
– Andrew Campling
Arguments
Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material
Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children
Explanation
While both speakers are deeply concerned about child protection online, they approach the problem from fundamentally different angles. The Philippine MP emphasizes addressing underlying economic causes like poverty that drive families to exploit children, while the Internet Watch Foundation trustee focuses on technical solutions like age verification. This disagreement is unexpected because both are child protection advocates but see completely different primary solutions.
Topics
Development | Cybersecurity | Human rights
Overall assessment
Summary
The main areas of disagreement center around the balance between privacy and safety (particularly regarding age verification), the effectiveness of technical versus socioeconomic solutions for child protection, and the specific mechanisms for international cooperation in platform regulation.
Disagreement level
The level of disagreement is moderate but significant in its implications. While speakers largely agree on the problems (online harm to vulnerable groups, platform accountability issues, need for international cooperation), they diverge substantially on solutions. The privacy versus safety debate represents a fundamental tension in digital rights policy, while the technical versus socioeconomic approach to child protection reflects different philosophical frameworks for addressing online harm. These disagreements suggest that achieving consensus on specific policy measures will require careful negotiation and potentially hybrid approaches that incorporate multiple perspectives.
Partial agreements
Takeaways
Key takeaways
Online harm disproportionately affects marginalized communities in the Global South due to intersecting inequalities, language barriers, and lack of platform prioritization
Legislative frameworks are often too narrow, focusing on takedowns rather than prevention, and can be weaponized against the very groups they aim to protect
Platform accountability requires transparency in content moderation processes, algorithmic decision-making, and actions taken against violators beyond simple content removal
Individual countries lack sufficient negotiation power with tech giants, necessitating coordinated regional or international approaches
Behavioral economics insights can be leveraged to design better safety interventions, using the same cognitive bias understanding that platforms use for engagement
Hybrid threats combining terrorism, child exploitation, and targeting of vulnerable groups through mental health platforms represent emerging challenges requiring coordinated responses
Prevention through design friction and community-based offline education is more effective than reactive content takedown measures
Multi-stakeholder collaboration between governments, platforms, and civil society is essential for developing effective and balanced online safety policies
Resolutions and action items
Invitation extended for regulators to join the Global Online Safety Regulators Network (GOSRN) to facilitate international cooperation
Proposal for ASEAN countries to engage with platforms as a bloc rather than individually to increase negotiation power
Recommendation for platforms to provide mandatory safety briefings to users similar to other service providers
Call for platforms to proactively report sources of harmful content to governments rather than just responding to individual takedown requests
Suggestion for algorithmic decision-making to be co-created by governments, civil society, and platforms together
Proposal to leverage existing community structures (schools, clubs, families) to provide offline education and empowerment before online exposure
Unresolved issues
How to effectively enforce regulations when major platforms refuse to comply with licensing requirements or attend government hearings
Balancing privacy rights with age verification and content scanning technologies for child protection
Addressing the fundamental economic incentives that drive platforms to prioritize engagement over safety
Developing culturally appropriate standards for different regions while maintaining international cooperation
Creating effective reporting mechanisms in local languages and contexts for Global South users
Preventing the weaponization of online safety laws against marginalized groups and activists
Addressing the gap between Western-designed platforms and Eastern value systems and legal frameworks
Managing the rise of platforms with no accountability mechanisms or human rights teams
Suggested compromises
Implementing minimum universal safety standards while allowing regional variations for cultural and contextual differences
Using behavioral nudges and design friction as alternatives to extreme restriction measures like complete social media bans
Combining technical solutions with community-based offline interventions rather than relying solely on either approach
Establishing transparency requirements for platform actions against violators while respecting commercial confidentiality
Creating tiered accountability systems where platforms with larger user bases face stricter requirements
Developing privacy-preserving safety technologies that protect users without compromising fundamental rights
Balancing proactive content moderation with protection against algorithmic bias and shadow banning of legitimate content
Thought provoking comments
The laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.
Speaker
Neema Iyer
Reason
This comment is deeply insightful because it reveals the paradox of protective legislation becoming a tool of oppression. It challenges the assumption that creating laws automatically leads to protection and highlights how power structures can co-opt well-intentioned regulations.
Impact
This comment fundamentally shifted the discussion from focusing solely on creating new regulations to examining how existing laws are implemented and enforced. It introduced the critical concept that legislative frameworks can have unintended consequences, setting the stage for other panelists to discuss the importance of balanced, enforceable policies.
We see more and more hybridization of these types of content mixed together with other content… we’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online… these terrorist groups or these groups, extremist groups, are actually targeting vulnerable kids.
Speaker
Arda Gerkens
Reason
This observation is thought-provoking because it reveals the evolution of online threats from discrete categories to complex, interconnected forms of harm. It demonstrates how traditional regulatory silos may be inadequate for addressing modern digital threats.
Impact
This comment introduced a new dimension to the discussion about the complexity of online harms. It moved the conversation beyond simple content takedowns to understanding how different forms of abuse intersect and require coordinated responses across different regulatory domains.
If the system only works within these platforms when the media pays attention, what happens to the millions of women in the Global South who never make headlines?
Speaker
Nighat Dad
Reason
This comment powerfully exposes the inequality in platform responses based on visibility and geography. It challenges the notion of equal protection online and highlights how media attention becomes a prerequisite for justice.
Impact
This comment crystallized the discussion around global inequities in platform accountability. It prompted other speakers to discuss the need for coordinated international responses and highlighted how current systems fail those without voice or visibility.
Behavioral economics is actually a field that blends insights from psychology and economics to fully understand how humans make decisions… we have to understand these cognitive biases and also be aware that we can use them to help individuals take more informed decisions.
Speaker
Sandra Maximiano
Reason
This comment introduced an entirely new analytical framework to the discussion, shifting from purely regulatory and technical approaches to understanding the psychological mechanisms that make people vulnerable online. It’s innovative in suggesting that the same tools used to exploit can be used to protect.
Impact
This intervention fundamentally broadened the scope of the discussion beyond traditional regulatory approaches. It introduced the concept of ‘nudging’ for protection and influenced subsequent speakers to consider design-based solutions rather than just content moderation.
I think we really need to think broader about how we are legislating about online violence… legislative frameworks are often too narrow. They focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women.
Speaker
Neema Iyer
Reason
This comment challenges the dominant paradigm of online safety regulation by questioning both the scope and cultural appropriateness of current approaches. It calls for more nuanced, context-specific solutions.
Impact
This comment established a critical theme that ran throughout the discussion – the inadequacy of one-size-fits-all solutions and the need for culturally sensitive, comprehensive approaches to online harm prevention.
The offenders are human. The victims are humans… if we concentrate on the technology, we are losing a very big part because this young person can be trained to be a bully… if they were trained offline long before they got onto the internet, then maybe it can become a movement that saves a generation.
Speaker
John Kiariye
Reason
This comment reframes the entire discussion by emphasizing the human element behind technology-mediated harm. It challenges the tech-centric approach and advocates for community-based, preventive solutions rooted in existing social structures.
Impact
This intervention brought the discussion full circle, grounding the technical and regulatory focus back in human relationships and community structures. It emphasized prevention over reaction and highlighted the importance of offline interventions for online safety.
Overall assessment
These key comments fundamentally shaped the discussion by challenging conventional approaches to online safety and introducing new analytical frameworks. The conversation evolved from a focus on reactive measures (content takedowns, legislation) to proactive, holistic approaches that consider behavioral psychology, cultural context, and community-based solutions. The comments revealed the limitations of current regulatory frameworks and highlighted the need for coordinated, multi-stakeholder responses that address both the technical and human dimensions of online harm. Most significantly, they exposed the global inequities in how online safety is implemented and experienced, pushing the discussion toward more inclusive and comprehensive solutions.
Follow-up questions
How can we develop interventions and safety mechanisms for platforms that don’t prioritize smaller countries with multiple local languages?
Speaker
Neema Iyer
Explanation
This addresses the challenge of platform governance in regions with linguistic diversity and smaller market shares, where safety mechanisms may not be adequately developed or localized
How can we develop broader legislative frameworks that address coordinated disinformation campaigns and ideological radicalization of minors online, beyond just intimate image sharing?
Speaker
Neema Iyer
Explanation
Current legislative frameworks are often too narrow and don’t address the full spectrum of online harms faced by marginalized communities
What are the specific criteria and thresholds for determining what constitutes ‘glorifying’ terrorist content or ‘call to action’ in content moderation?
Speaker
Arda Gerkens
Explanation
This is needed to clarify vague legislation and create consistent standards across European countries for terrorist content removal
How can we develop coordinated approaches to tackle hybrid threats that combine terrorism, child sexual abuse material, and targeting of vulnerable children across different regulatory domains?
Speaker
Arda Gerkens
Explanation
There’s an emerging trend of hybridization where terrorist groups are using CSAM and targeting vulnerable children, requiring cross-domain collaboration
What actions are platforms taking against scammers who sponsor harmful posts, beyond just content takedown?
Speaker
Teo Nie Ching
Explanation
There’s a lack of transparency about platform accountability measures against bad actors, not just their content
How much advertising revenue do major platforms collect from individual countries or regions like ASEAN?
Speaker
Teo Nie Ching
Explanation
This information is needed to understand the economic leverage that could be used in platform negotiations
How can we establish international standards for platform responsibilities instead of individual countries negotiating separately?
Speaker
Teo Nie Ching
Explanation
Individual countries lack sufficient negotiation power with tech giants, requiring coordinated international approaches
What happens to millions of women in the Global South who face online harm but never make headlines or receive media attention?
Speaker
Nighat Dad
Explanation
Platform response systems often only work when media pays attention, leaving many victims without recourse
How can we design algorithmic decisions through co-creation involving governments, civil society, and platforms rather than leaving them to platform owners’ ideologies?
Speaker
Neema Iyer
Explanation
Current algorithmic decisions reflect the moral and political ideologies of platform owners, requiring more democratic input
How can we introduce design friction to prevent harmful content from being shared in the first place, rather than relying on reactive takedown measures?
Speaker
Neema Iyer
Explanation
Proactive prevention through design changes could be more effective than reactive content moderation
How can we better utilize existing community structures (schools, clubs, families) to empower potential victims before they encounter online threats?
Speaker
John Kiariye
Explanation
Focusing on offline preparation and community-based solutions could complement technical approaches to online safety
What are the most effective behavioral economics interventions and nudges that can be implemented by platforms to promote safer online behavior?
Speaker
Sandra Maximiano
Explanation
Understanding and applying behavioral insights could help design more effective safety measures that work with human psychology rather than against it
How can regional blocs like ASEAN develop coordinated standards for platform regulation that reflect their cultural and religious contexts?
Speaker
Teo Nie Ching
Explanation
Regional coordination could provide more negotiating power and culturally appropriate standards than individual country approaches
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.