The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28
Event report
Speakers and Moderators
Speakers:
- Dr. Sarah Myers West, AI Now
- Udbhav Tiwari, Mozilla
- Riana Pfefferkorn, Stanford Internet Observatory
- Eliska Pirkova, Access Now
Moderators:
- Namrata Maheshwari, Access Now
- Daniel Leufer, Access Now
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Sarah Myers West
The analysis provides a comprehensive overview of the concerns and criticisms surrounding artificial intelligence (AI). One notable concern is that AI can be misleading and misunderstood, leading to flawed policies. It is argued that in the field, there is a tendency to make claims about AI without proper validation or testing, which undermines trust in the technology.
At present, AI is primarily seen as a computational process that applies statistical methods to large datasets. These datasets are often acquired through commercial surveillance or extensive web scraping. This definition emphasizes the reliance on data-driven approaches to derive insights and make predictions. However, the ethical implications of this reliance on data need to be considered, as biases and inequalities can be perpetuated and amplified by AI systems.
The lack of validation in AI claims is another cause for concern. Many AI systems are said to serve specific purposes without undergoing rigorous testing or validation processes. Discrepancies and problems often go unnoticed until auditing or other retrospective methods are employed. The absence of transparency and accountability in AI claims raises questions about the reliability and effectiveness of AI systems in various domains.
Furthermore, it is evident that AI systems have the potential to mimic and amplify societal inequality. Studies have shown that AI can replicate patterns of discrimination and exacerbate existing inequalities. Discrimination within AI systems can have adverse effects on historically marginalised populations. This highlights the importance of considering the social impact and ethical implications of AI deployment.
In terms of content moderation, AI is often seen as an attractive solution. However, it is acknowledged that it presents challenges that are difficult to overcome. For example, AI-based content moderation systems are imperfect and can lead to violations of privacy as well as false positive identifications. Malicious actors can also manipulate content to bypass these AI systems, raising concerns about the effectiveness of AI in tackling content moderation issues.
To address these concerns, there is a need for more scrutiny and critical evaluation of the use of AI in content moderation. Establishing rigorous standards for independent evaluation and testing is crucial to ensure the effectiveness and ethical use of AI technology. This approach can help mitigate the risks associated with privacy violations, false positives, and content manipulation.
In conclusion, the analysis underscores the importance of addressing the concerns and criticisms related to AI. The potential for misrepresentation and flawed policies, the lack of validation and transparency in AI claims, the amplification of societal inequality, and the challenges in content moderation highlight the need for thoughtful and responsible development and deployment of AI technologies. Ethical considerations, rigorous testing, and ongoing evaluation should be central to AI research and implementation to ensure that the benefits of AI can be realized while mitigating potential harms.
Audience
During the discussion on child safety in online environments, several speakers emphasised the necessity of prioritising the protection of children from harm. They stressed the importance of distinguishing between general scanning or monitoring and the specific detection of harmful content, particularly child sexual abuse material (CSAM). This distinction highlighted the need for targeted approaches and solutions to address this critical issue.
The use of artificial intelligence (AI) and curated algorithms to identify CSAM content received support from some participants. They mentioned successful implementations in various projects, underlining the potential effectiveness of these advanced technologies in detecting and combating such harmful material. Specific examples were provided, including the use of hashing techniques for verification processes, the valuable experience of hotlines, and the use of AI in projects undertaken by the organisation InHope.
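Hash-based verification of this kind is conceptually simple: a file's fingerprint is compared against a list of fingerprints of material that has already been reviewed and confirmed. The sketch below is only an illustration of the exact-match case, with a hypothetical `KNOWN_HASHES` set standing in for a hash list maintained by a hotline; production systems such as PhotoDNA rely on perceptual rather than cryptographic hashes so that re-encoded or slightly altered copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of previously verified illegal images, standing in
# for a list distributed to platforms by a hotline. Real deployments use
# perceptual hashes (e.g. PhotoDNA, PDQ) rather than SHA-256, so that cropped
# or re-encoded copies still match.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_material(path: Path) -> bool:
    """Return True if the file's hash appears in the known-hash list."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES
```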
However, concerns were raised regarding the potential misuse of child safety regulations. There was apprehension that such regulations might extend beyond their intended scope and encroach on other important areas, such as encryption and counterterrorism. It was stressed that policymakers should be wary of unintended consequences and should not let child safety regulations become a slippery slope towards other agendas or towards compromising important tools like encryption.
The participants also emphasised the significance of online safety for everyone, including children, and the need to prioritise this aspect when developing online solutions. Privacy concerns and the protection of personal data were seen as vital considerations, and transparency in online platforms and services was highlighted as a crucial element in building trust and safeguarding users, particularly children.
The existing protection systems were acknowledged as generally effective but in need of improvement. Participants called for greater transparency in these systems, expansion to other regions, and better differentiation between various types of technology. They stressed that a comprehensive approach was required, involving not only the use of targeted technology but also education, safety measures, and addressing the root causes by dealing with perpetrators.
There were also concerns voiced about law enforcement’s use of surveillance tools in relation to child safety. Instances of misuse or overuse of these tools in the past created a lack of trust among some speakers. An example was provided from Finland, where a hacker compiled about 90% of a censorship tool’s secret list of blocked websites and found that less than 1% of the listed sites contained actual child sexual abuse material.
In conclusion, the discussion on child safety in online environments highlighted the need to differentiate between general scanning and scanning for specific harmful content. It emphasised the importance of targeted approaches, such as the use of AI and curated algorithms, to detect child sexual abuse material. However, concerns were raised about the potential misuse of regulations, particularly in the context of encryption and other narratives like counterterrorism. The protection of online safety for everyone, the improvement of existing systems, and a comprehensive approach involving technology, education, and safety measures were identified as crucial elements in effectively protecting children online.
Namrata Maheshwari
The discussion revolves around the crucial topic of online safety and privacy, with a specific emphasis on protecting children. While there may be various stakeholders with different perspectives, they all share a common goal of ensuring online safety for everyone. The conversation acknowledges the challenges and complexities associated with this issue, aiming to find effective solutions that work for all parties involved.
In line with SDG 16.2, which aims to end abuse, exploitation, trafficking, and violence against children, the discussion highlights the urgency and importance of addressing online safety concerns. It acknowledges that protecting children from online threats is not only a moral imperative but also a fundamental human right. The inclusion of this SDG demonstrates the global significance of this issue and the need for collective efforts to tackle it.
One notable aspect of the conversation is the recognition and respect given to the role of Artificial Intelligence (AI) in detecting child sexual abuse material (CSAM). Namrata Maheshwari expresses appreciation for the interventions and advancements being made in this area. The use of AI in detecting CSAM is a critical tool in combating child exploitation and safeguarding children from harm.
The conversation highlights the need for collaboration and cooperation among various stakeholders, including government authorities, tech companies, educators, and parents, to effectively address online safety concerns. It emphasizes the shared responsibility in creating a safe online environment for children, where their privacy and security are protected.
Overall, this discussion underscores the significance of online safety and privacy, particularly for children. It highlights the importance of aligning efforts with global goals, such as SDG 16.2, and recognizes the positive impact that technology, specifically AI, can have in combating online threats. By working together and adopting comprehensive strategies, we can create a safer and more secure digital space for children.
Udbhav Tiwari
The analysis conducted on content scanning and online safety highlights several significant points. One of the main findings is that while it is technically possible to develop tools for scanning certain types of content, ensuring their reliability and trustworthiness is a difficult task. Platforms already perform certain forms of scanning for unencrypted content. However, Mozilla’s experience suggests that verifying the reliability and trustworthiness of such systems poses challenges. Currently, no system has undergone the level of independent testing and rigorous analysis required to ensure its effectiveness.
Another concerning aspect of content scanning is the involvement of governments. The analysis reveals that once technological capabilities exist, governments are likely to leverage them to detect content deemed worthy of attention. This raises concerns about the potential misuse of content scanning technology for surveillance purposes. Over time, the ability of companies to resist requests or directives from governments has diminished. An example of this is seen in the implementation of separate technical infrastructures for iCloud due to government requests. Therefore, the law and policy aspect of content scanning can be more worrying than the technical feasibility itself.
The importance of balancing the removal of harmful content with privacy concerns is emphasized. Mozilla’s decision not to proceed with scanning content on Firefox Send due to privacy concerns demonstrates the need to find a middle ground. The risk of constant content scanning on individual devices and the potential scanning of all content is a significant concern. Different trust and safety measures exist for various use cases of end-to-end encryption.
The analysis brings attention to client-side scanning, which already exists in China through software like Green Dam. It highlights the fact that the conversation surrounding client-side scanning worldwide is more nuanced than commonly acknowledged. Government measures and regulations pertaining to client-side scanning often go unnoticed on an international scale.
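To make concrete what “client-side” means here, the following simplified Python sketch (hypothetical function names, not any vendor’s actual implementation) shows where such a check would sit: the match against a supplied hash list happens on the device, over the plaintext, before any end-to-end encryption is applied, so the encryption’s guarantees never cover the scanned content.

```python
import hashlib
from typing import Callable, Optional

def send_with_client_side_scan(
    plaintext: bytes,
    blocked_hashes: set,                 # hash list supplied by the vendor or a government
    encrypt: Callable[[bytes], bytes],   # the app's normal end-to-end encryption step
    report: Callable[[str], None],       # reporting hook; who receives this is a policy choice
) -> Optional[bytes]:
    """Simplified client-side scanning flow.

    The check runs on the device, over the plaintext, before encryption,
    which is why critics argue it sits outside the guarantees that
    end-to-end encryption is supposed to provide.
    """
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in blocked_hashes:
        report(digest)        # the device flags the match before anything is sent
        return None           # the message is blocked before it is ever encrypted
    return encrypt(plaintext)
```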
Platforms also need to invest more in understanding local contexts to improve enforcement. The study revealed that identifying secret Child Sexual Abuse Material (CSAM) keywords in different languages takes platforms years, suggesting a gap in their ability to effectively address the issue. Platforms have shown a better record of enforcement in English than in the global majority, indicating a need for more investment and understanding of local contexts.
The issue of child sexual abuse material is highlighted from different perspectives. The extent to which child sexual abuse materials are pervasive depends on the vantage point. The analysis reveals that actors involved in producing or consuming such content often employ encrypted communication or non-online methods, making it difficult to fully grasp the magnitude of the problem. Further research is needed to understand the vectors of communication related to child sexual abuse material.
Finally, the analysis stresses that users have the ability to take action to address objectionable content. They can report such content on platforms, directly involve law enforcement, or intervene at a social level by reaching out to the individuals involved. Seeking professional psychiatric help for individuals connected to objectionable content is also important.
In conclusion, the analysis of content scanning and online safety identifies various issues and concerns. It emphasizes the need to balance the removal of harmful content with privacy considerations while cautioning against potential government surveillance practices. Furthermore, the study underscores the importance of understanding local contexts for effective enforcement. The issue of child sexual abuse material is found to be complex, requiring further research. Finally, users are encouraged to take an active role in addressing objectionable content through reporting, involving law enforcement, and social intervention.
Eliska Pirkova
The analysis of the arguments reveals several important points regarding the use of technology in different contexts. One argument highlights the potential consequences of using AI tools or content scanning in encrypted environments, particularly in crisis-hit regions. The increasing use of such technologies, even in democracies, is a cause for concern as they can only identify known illegal content, leading to inaccuracies.
Another argument raises concerns about risk-driven regulations, suggesting that they might weaken the rule of law and accountability. The vague definition of ‘significant risks’ in legislative proposals is seen as providing justification for deploying certain technologies. The need for independent judicial bodies to support detection orders is emphasized to ensure proper safeguards.
Digital platforms are seen as having a significant role and responsibilities, particularly in crisis contexts where the state is failing. They act as the last resort for protection and access to remedies. It is crucial for digital platforms to consider the operational environment and the consequences of complying with government pressures.
The pending proposal by the European Union (EU) on child sexual abuse material is seen as problematic from a rights perspective. It disproportionately imposes measures on private actors that can only be implemented through technologies like client-side scanning. This raises concerns about potential violations of the prohibition of general monitoring.
Similar concerns are expressed regarding the interplay between the EU’s ongoing, still-negotiated proposal and the existing Digital Services Act. If the proposal remains in its current form, some of its measures could be in direct conflict with the Digital Services Act. The argument also suggests that the EU’s legitimization of certain tools could lead to their misuse by other governments.
The global implications of the EU’s regulatory approach, known as the Brussels effect, are also discussed. Many jurisdictions worldwide have followed the EU’s approach, which means that well-intentioned measures may be significantly abused if they end up in inappropriate systems.
The importance of children’s rights is acknowledged, with a recognition that the protection of children is a shared goal. However, differing means, policy approaches, and regulatory solutions may generate counterproductive debates when critical views towards technical solutions are dismissed.
In conclusion, the analysis highlights the complexities and potential implications of technology use in various contexts, particularly concerning online security, accountability, and rights protection. Dialogue and negotiations among stakeholders are crucial to understand different perspectives and reach compromises. Inclusive and representative decision-making processes are essential for addressing the challenges posed by technology.
Riana Pfefferkorn
The analysis explores various arguments and stances on the contentious issue of scanning encrypted content. One argument put forth is that scanning encrypted content, while protecting privacy and security, is currently not technically feasible. Researchers have been working on this problem, but no solution has been found. The UK government has also acknowledged this limitation. This argument highlights the challenges of striking a balance between enforcing online safety regulations and maintaining the privacy and security of encrypted content.
Another argument cautions against forced scanning of encrypted content by governments. This argument emphasizes that such scanning could potentially be expanded to include a wide range of prohibited content, jeopardizing the privacy and safety of individuals and groups such as journalists, dissidents, and human rights workers. It is argued that any law mandating scanning could be used to search for any type of prohibited content, not just child sex abuse material. The risk extends to anyone who relies on secure and confidential communication. This argument underscores the potential negative consequences of forced scanning on privacy and the free flow of information.
However, evidence suggests that content-oblivious techniques can be as effective as content-dependent ones in detecting harmful content online. Survey results support this notion, indicating that a content-oblivious technique was considered equal to or more useful than a content-dependent one in almost every category of abuse. User reporting, in particular, emerged as a prevalent method across many abuse categories. This argument highlights the effectiveness of content-oblivious techniques and user reporting in identifying and mitigating harmful online content.
Furthermore, it is argued that end-to-end encrypted services should invest in robust user reporting flows. User reporting has been found to be the most effective detection method for multiple types of abusive content. It is also seen as a privacy-preserving option for combating online abuse. This argument emphasizes the importance of empowering users to report abusive content and creating a supportive environment for reporting.
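As a rough illustration of what a narrow reporting flow could look like, the hypothetical data structure below (not any particular service’s format) carries only the reported message, which the reporting user could already read, plus the identifiers needed to act on the report; the rest of the conversation stays encrypted and unseen by the platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    """Hypothetical minimal user report from an end-to-end encrypted messaging client.

    Only the reported message (already readable by the reporting user) and the
    identifiers needed to act on the report are included; nothing else from the
    conversation is disclosed to the platform.
    """
    reporter_id: str
    reported_account_id: str
    category: str                      # e.g. "csam", "harassment", "spam"
    reported_message_plaintext: bytes  # disclosed by the reporter, not decrypted by the platform
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```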
On the topic of metadata analysis, it is noted that while effective, this approach comes with significant privacy trade-offs. Metadata analysis requires services to collect and analyze substantial data about their users, which can intrude on user privacy. Some services, such as Signal, purposely collect minimal data to protect user privacy. This argument highlights the need to consider privacy concerns when implementing metadata analysis for online content moderation.
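The contrast with metadata analysis can be sketched in the same spirit: the toy signal below (illustrative field names and logic, not drawn from any real deployment) is computed purely from who messaged whom and when, never from message contents, but it presupposes that the service retains exactly the kind of per-user activity data that privacy-focused services such as Signal deliberately avoid collecting.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MessageEvent:
    sender_id: str
    recipient_id: str
    timestamp: float  # seconds since epoch; message content is never stored

def bulk_messaging_score(events: List[MessageEvent], window_seconds: float = 3600.0) -> float:
    """Toy content-oblivious signal: how many distinct recipients appear within
    the most recent window. High values can suggest spam-like behavior without
    the service ever seeing what was said."""
    if not events:
        return 0.0
    latest = max(e.timestamp for e in events)
    recent = [e for e in events if latest - e.timestamp <= window_seconds]
    return float(len({e.recipient_id for e in recent}))
```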
The analysis concludes by emphasizing the need for both advocates for civil liberties and governments or vendors to recognize and acknowledge the trade-offs inherent in any abuse detection mechanism. There is no abuse detection mechanism that is entirely beneficial without drawbacks. It is crucial to acknowledge and address the potential negative consequences of any proposed solution. This conclusion underscores the importance of finding a balanced approach that respects both privacy and online safety.
The analysis also discusses the challenging practical implementation of co-equal fundamental rights. It asserts that fundamental rights, including privacy and child safety, should be considered co-equal, with no single right taking precedence over another. The difficulty lies in effectively implementing this principle in practice, particularly in contentious areas like child safety.
Furthermore, the analysis highlights the importance of holding governments accountable for maintaining trustworthiness. It is argued that unrestricted government access to data under the guise of child safety can exceed the necessity and proportionality required in a human rights-respecting framework. Trustworthiness of institutions hinges on the principle of government accountability.
In summary, the analysis provides insights into the complications surrounding the scanning of encrypted content and the trade-offs associated with different approaches. It emphasizes the need for a balanced approach that considers privacy, online safety, and fundamental rights. Acknowledging the limitations and potential risks associated with each proposed solution is crucial for finding effective and ethical methods of content moderation.
Session transcript
Namrata Maheshwari:
Could I ask the organizers to help us see the speakers on the screen, the two speakers joining online? Oh, there we go. Hi. Just to check, Riana, Sarah, can you hear us? Yes. Great. Do you want to try saying something so we can check if your audio is working? Hi. Can you hear me? Yes. I can hear you. Great. Can you hear me? Yes, we can. Thank you. All right. So, Udbhav will be joining us shortly, but maybe we can start just to make the most of time. My name is Namrata Maheshwari. I’m from Access Now, an international digital rights organization. I lead our global work on encryption, and I also lead our policy work in South Asia. I have the relatively easy task of moderating this really great panel, so I’m very excited about it, and I hope we’re able to make it as interactive as possible, which is why this is a roundtable. So, we’ll open it up, hopefully halfway through, but definitely for the last 20 minutes. So, if you have any questions, please do note them down. Well, quick introduction, and then maybe I’ll do some context setting. I’ll start with Eliska Pirkova on my left, who is also my colleague from Access Now. She is Senior Policy Analyst and Global Freedom of Expression Lead, and as a member of the European team, she leads our work on freedom of expression, content governance, and platform accountability. Thank you so much for being here. I will introduce Udbhav anyway while we wait for him to come here. He is the Head of Global Product Policy at Mozilla, where he focuses on cybersecurity, AI, and connectivity. He was previously at the Public Policy Team at Google, and Non-Resident Scholar with Carnegie Endowment. And online, we have Riana and Sarah. Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. A lawyer by training, her work focuses on encryption policy in the US and other countries, and related fields such as privacy, surveillance, and cybersecurity. Sarah Myers West is the Managing Director of AI Now Institute, and recently served a term as a Senior Advisor on AI at the Federal Trade Commission. She holds a decade of experience in the field of the political economy of technology, and her forthcoming book, Tracing Code, examines the origins of commercial surveillance. Thank you so very much. These are people who I believe have played a very important role in shaping the discourse around encryption and AI in recent times. So thank you so much for lending your insights and expertise to this panel, and thank you all for sharing your time with us here today. Well, we’re seeing a lot of proposals across the world in different regions on AI and encryption. So this session really is an effort to shed some light on the intersections between the two, which we think lie within the content scanning proposals that we’re seeing in different countries, US, UK, EU, India, and Australia, and a lot of others. These proposals mainly suggest scanning the content of messages on encrypted platforms, and proponents say that there is a way to do this in a way that would not undermine privacy and would help eliminate harmful material. And opponents say that there is an over-reliance on AI, because the tools that would need to be developed to scan this content are AI tools, automated scanning tools, which are prone to biases, prone to, well, false outputs, and also that it would undermine privacy and erode end-to-end encryption as we know it. So I’m hoping that the speakers on this panel can tell us more about it. With that, I’ll get us started. Just some housekeeping. 
Online, we have my colleague, Reits, moderating. So whoever is joining online, if you have questions, drop them in chat, and Reits will make sure we address those. Riana, if I could start with you. Proposals in many countries to moderate content on encrypted platforms are premised on the idea that it is possible to do this without undermining privacy. Could you tell us a little bit more about what the merits of this are, what the real impact is on encryption, and on the user groups that use these platforms, including the groups that these proposals seek to protect?
Riana Pfefferkorn:
Sure. So there’s a saying in English, which is that you want to have your cake and eat it, too. And that’s what this idea boils down to, the idea that you can scan encrypted content to look for bad stuff, but without breaking or weakening end-to-end encryption, or otherwise undermining the underlying privacy and security guarantees intended for the user. We just don’t know how to do this yet, and that’s not for lack of trying. Computer security researchers have been working on this problem, but they haven’t yet figured out a way to do this. So the tools don’t exist yet, and it’s doubtful that they will, at least in a reasonable timeframe. You can’t roll back encryption to scan for just one particular type of content, such as child sex abuse material, which is usually what governments want end-to-end encrypted apps to scan for. If you’re trying to scan for one type of content, you have to undermine the encryption for all of the content, even perfectly innocent content. And that defeats the entire purpose of using end-to-end encryption, which is making it so that nobody but the sender and the intended recipient can make sense of the encrypted message. This has been in the news lately because, perhaps most prominently, the United Kingdom government has been pretending that it’s possible to square this particular circle. Basically, the UK has been one of the biggest enemies of strong encryption for years now, at least among democracies. It’s been trying to incentivize the invention of tools that can safely scan encrypted content through government-sponsored tech challenges, and it just passed a law, the Online Safety Bill, that engages in the same magical thinking that this is possible. The issue here is that, like I said, there isn’t any known way to scan encrypted content without undermining privacy and security. And nevertheless, this new law in the UK gives their regulator for the internet and telecommunications the power to serve compulsory notices on encrypted app companies, forcing them to try and do just that. The regulator has now said, actually, OK, we won’t use this power, because they’ve basically admitted that there just isn’t a way to do this yet. They say, we won’t use that power until it becomes technically feasible to do so, which might effectively be quite a while, because we don’t have a way of making this technically feasible. And part of the danger of having this power in the law is that it’s premised upon the need to scan for child sex abuse material. But there isn’t really any reason that you couldn’t expand that to whatever other type of prohibited content the government might want to be able to find on the service, which might be anything that’s critical of the government. It might be lèse-majesté. It might be content coming from a religious minority, et cetera. And so requiring companies to scan by undermining their own encryption for whatever content the government says they have to look for could put journalists at risk, dissidents, human rights workers, anybody who desperately needs their communications to stay confidential and impervious to outside snooping by malicious actors, which might be your own government, might be somebody else who has it in for you, even in cases of domestic violence, for example, or child abuse situations within the home. So we’ve seen some at least positive moves in this area in terms of a lot of public pushback and outcry over this. 
Several of the major makers of encrypted apps, including Signal, WhatsApp, which is owned by Meta, and iMessage, which is owned by Apple, have threatened to just walk away from the UK market entirely rather than comply with any compulsory notice telling them that they have to scan encrypted material for child sex abuse content. So I take that as a positive sign that not only some of the major makers of these apps are saying that isn’t something that we could do, and that they’re saying we would rather just walk away rather than undermine what our users have come to expect from us, which is the level of privacy and security that end-to-end encryption can guarantee.
Namrata Maheshwari:
Thank you, Riana. Sarah, if you could just zoom out a bit for a second. And there have been a lot of thoughts about how artificial intelligence is a misleading term, and it could lead to flawed policies based on a misrepresentation of the kind of capabilities that the technology has. Do you think there is a better term for it? And if so, well, what would it be? And the second limb of the question was, again, there have been a lot of studies and debates around the inherent biases and flaws of AI systems. So if these were to be implemented within encrypted environments, would these characteristics, if that is something that would happen, be transferred to encrypted platforms in a way that would lead to, well, unique consequences?
Sarah Myers West:
Sure, it’s a great question. I think it is worth taking a step back and really pinning down what it is that we mean by artificial intelligence, because that’s a term that has meant many different things over an almost 70-year history. And it’s one that’s been particularly value-laden in recent policy conversations. In the current state of affairs… No worries, maybe we can come back to Sarah once she rejoins. Reits, could you let me know when Sarah’s back online? Oh, she’s back. Okay. Hi, Sarah. I’m back. Yes, sorry about that. What I was about to say was, what we sort of mean by artificial intelligence in the present-day moment is, you know, the application of statistical methods to very large data sets, data sets that are often produced through commercial surveillance or through, you know, massive amounts of web scraping and sort of mining for patterns within that massive amount of data. So, it’s essentially, you know, a foundationally computational process. But really, you know, what Riana was talking about here was sort of surveillance by another means. And I think a lot of ideals get imbued onto what AI is capable of that don’t necessarily bear out in practice. You know, the FTC has recently described artificial intelligence as, you know, largely a marketing term. And there’s a frequent tendency in the field to see claims about AI being able to, you know, serve certain purposes that lack any underlying validation or testing where, you know, within the field, you know, benchmarking standards may vary widely. And very often companies are able to make claims about the capabilities of the systems that don’t end up bearing out in practice. We sort of discover them through auditing and other methods after the fact. And to that point, you know, given that AI is essentially grounded in pattern matching, there is a, you know, very well documented phenomenon in which artificial intelligence is going to mimic patterns of societal inequality and amplify them at scale. So, we see widespread patterns of, you know, discrimination within artificial intelligence systems in which, you know, the harms accrue to populations that have historically been discriminated against and the benefits accrue to those who have experienced privilege. And that AI is sort of broadly being claimed to be some magical solution, but not necessarily with, you know, robust independent checks that it will actually work as claimed.
Namrata Maheshwari:
Thank you. Eliska, given your expertise on content governance and the recent paper you led on content governance in times of crisis, could you tell us a bit about the impact of introducing AI tools or content scanning in encrypted environments in regions that are going through crisis?
Eliska Pirkova:
Sure. Thank you very much. And maybe I also would like to start from a sort of a content governance perspective and what we mean by the danger when it comes to client-side scanning and weakening encryption, which is the main precondition for security and safety within the online environment, and which of course becomes even more relevant when we speak about the regions impacted by crisis. But unfortunately, these technologies are spreading also in democracies across the world, and legislators and regulators increasingly sell the idea that they will provide these magical solutions to ongoing very serious crimes such as child sexual abuse materials. And I will get to that. This also concerns other types of illegal content, such as terrorist content or potentially even misinformation and disinformation that is spreading online on encrypted spaces such as WhatsApp or other private messaging apps. So, of course, there are a number of questions that must be raised when we discuss content moderation, and content moderation has several phases. It starts with the detection of the content, then evaluation and assessment of the content, and then consequently, ideally, there should also be some effective access to remedy provided once there is the outcome of this process. And when we speak about end-to-end encryption violation and client-side scanning, the most worrisome stage is precisely the detection of the content, where these technologies are being used. And one very important point: this is usually done using different hash technologies, different types of these technologies; PhotoDNA is quite well known. And, of course, these technologies, and I very much like what Sarah mentioned, it’s quite questionable whether we can even label them as artificial intelligence. I would rather go for machine learning systems in that regard. And what is very essential to recognize here is that these technologies simply scan the content and they are usually used for identifying content that was already previously identified as illegal content, depending on the category they are supposed to identify. So then they trace either identical or similar enough content to that which was already captured. And the machine learning system as such cannot particularly distinguish whether this is harmful content, whether this is illegal content, whether this is a piece of disinformation. Because this content, of course, doesn’t have any technical features per se that would precisely provide this sort of information, which ultimately results in a number of false positives and negatives and errors of these technologies that impose serious consequences on fundamental rights protection. So what I find particularly worrisome in this debate increasingly, and that’s also very much relevant regarding the regions impacted by crisis, is the emphasis on significant risk and the justification that these types of technologies can be deployed if there is a significant risk to safety or other significant risks that are usually very vaguely defined in these legislative proposals that are popping up across the world. 
And if we have this risk-driven, I don’t want to call it ideology, but trend behind the regulation, then of course what will be significantly decreased is precisely the requirements for the rule of law and accountability, such as that, for instance, these detection orders should be fully backed up by independent judicial bodies, and they should be the ones who actually decide whether something like that is necessary and conduct that initial assessment. And when we finally put it in the context of crisis, of course in times when the rule of law is weakened, either by an authoritarian regime in power that seeks to use these technologies to crack down on dissidents, human rights activists and defenders, and we at Access Now, being a global organization, see that all over again, that this is the primary goal of these types of regulations and legislations, or by these regimes being inspired by the democratic Western world where these regulations are also probably proliferating more and more, then of course the consequences can be fatal. Sensitive information can be obtained about human rights activists as a part of a broader surveillance campaign. It also means that in such contexts, where the state is failing and the state is the main perpetrator of violence in times of crisis, it is the digital platforms and private companies who often act as the last resort of protection and access to any sort of remedy. And under those circumstances, not only does their importance increase, but so do their obligations and their responsibility to actually get it right. That of course includes the due diligence obligation: understanding what kind of environment they operate in, what is technically feasible, and what the actual consequences are if they, for instance, comply with the pressure and wishes of the government in power, which we often see especially when it comes to informal cooperation between the government and the platform. That was a lot, so I’ll stop here. Thank you.
Namrata Maheshwari:
Thank you. Our fourth speaker, Udbhav Tiwari, is having some trouble with his badge at the entrance. I don’t know if the organizers can help with that at all, but he’s at the venue, just having trouble getting a copy, so just a request in case you are able to help; no worries if not, he’ll be here shortly. But in the meantime we can keep the session going. Riana, I’d like to come back to you. A lot of the conversations and debates on this subject revolve around the very important question of, well, what are the alternatives? There are very real challenges in terms of online safety and harmful material online, and very real concerns around privacy and security. So the question is, if not content scanning, then what? In that context, could you tell us more about your research on content-oblivious trust and safety techniques, and whether you think there are any existing or potential privacy-preserving alternatives?
Riana Pfefferkorn:
Sure. So I published research in Stanford’s own Journal of Online Trust and Safety in early 2022. There’s a categorization that I did in this research, which is content-dependent versus content-oblivious techniques for detecting harmful content online. Content dependent means that the technique requires at-will access by the platform to the contents of user data. So some examples would be automated scanning, PhotoDNA as an example, or human moderators who go to look for content that violates the platform’s policies against abusive uses. I would also include client-side scanning, at least as I was describing it, as a content-dependent technique, because it’s looking at the contents of messages before they get encrypted and transmitted to the recipient. Content oblivious, by contrast, means that the trust and safety technique doesn’t need at-will access to message contents or file contents in order to work. So examples would be analyzing data about a message rather than the contents of a message, so metadata analysis, as well as analysis of behavioral signals: how is this user behaving, even if you can’t see the contents of their messages. Another example would be user reporting of abusive content, because the reason that the platform gets access to the contents of something isn’t that it had the ability to go and look for it, it’s that the user chose to report it to the platform itself. So I conducted a survey in 2021 of online service providers, which included both end-to-end encrypted apps as well as other non-E2EE types of online services, and I asked them what types of trust and safety techniques they use across 12 different categories of abusive content, from child safety crimes to hate speech to spam to mis- and disinformation and so on. And I asked them which of three techniques, namely content scanning, which is content dependent, and metadata analysis and user reporting, which are content oblivious, they found most useful for detecting each of those 12 different types of abusive content. And what I found was that for almost every category, a content-oblivious technique was deemed to be as or more useful than a content-dependent one. Specifically, user reports in particular prevailed across many categories of abuse I asked about. The only exception was child sex abuse material, where automated scanning was deemed to be the most useful, meaning things like PhotoDNA. These findings indicate that end-to-end encrypted services ought to be investing in making robust user reporting flows, ideally ones that expose as little information about the conversation as possible apart from the abusive incident. I find user reporting to be the most privacy-preserving option for fighting online abuse. Plus, once you have a database of user reports, you could apply machine learning techniques to users or groups across your service if you want to look for some trends, without necessarily searching across the entire database of all content on the platform. Another option, metadata analysis, in my survey didn’t fare as well as user reporting in terms of usefulness as perceived by the providers, but that was a couple of years ago, and even then the use of AI and ML were already helping to detect abusive content, so those tools surely have room to improve. I do want to mention, though, that it’s important to recognize that there are trade-offs to any of the proposals that we might come up with. Metadata analysis has major privacy trade-offs compared to user reporting, because the service has to collect and 
analyze enough data about its users to be able to do that kind of approach. There are some services, like Signal, that choose to collect extremely minimal data about their users as part of their commitment to user privacy. So when we’re talking about trade-offs, trade-offs might be inaccuracy, there might be false positive rates or false negative rates associated with a particular option, privacy intrusiveness, what have you. There’s no abuse detection mechanism that is all upside and no downside. We can’t let governments or vendors pretend otherwise, especially when it comes to pretending that you’re going to have all of the upside without any kind of trade-offs whatsoever, which is what I commonly see: like, oh yeah, it’s worth these privacy trade-offs or these security trade-offs because we’re going to realize this upside. Well, that’s not necessarily guaranteed. But at the same time, I think that as advocates for civil liberties, for human rights, for strong encryption, it’s important for us not to pretend that the things that we advocate as alternatives don’t also have their own trade-offs. There’s a great report that CDT published in 2021 that looked at a bunch of different approaches, called Outside Looking In. It’s also a great resource for looking at the different sorts of options in the end-to-end encrypted context, and, you know, the tension between doing trust and safety and how to continue respecting strong encryption.
Namrata Maheshwari:
Great. Udbhav, we’ll come to you now. A lot of proposals, again, on content scanning are premised on the admittedly well-intentioned goal of wanting to eliminate harmful material online. From a product development perspective, do you think it is possible to develop tools that are limited to scanning certain types of content? And looking at the cross-border implications as well, from a platform that provides services in various regions, what do you think the impact of implementing such capabilities in one region would be on other regions with different kinds of governments and contexts?
Udbhav Tiwari:
Thanks, Namrata. I think that the first angle with which to look at it is whether it’s technically feasible or not, and the second is whether it’s feasible in law and policy, and I think the two of them have two different answers. Purely from the technical feasibility perspective, it depends on how one decides to define client-side scanning and what constitutes client-side scanning or not. But there are different ways in which platforms already do certain kinds of scanning for unencrypted content that some of them claim can be done for encrypted content in a way that is reliable. But personally speaking, and also from Mozilla’s own experiences as we’ve evaluated them, it’s quite difficult to take any of those claims at face value, because almost none of these systems, when they do claim to only detect a particular piece of content well, have undergone the level of independent testing and rigorous analysis that is required for those claims to actively be verified by the rest of either the security community or the community that generally works in trust and safety, like Riana was talking about. And the second aspect, which is the law and policy aspect, is I think the more worrying concern, because it’s very difficult to imagine a world in which we deploy these technologies for a particular kind of content, presuming it meets the really high bar of being reliable and trustworthy and also somehow privacy-preserving, and the legal challenges that it creates don’t end with it existing, but extend to how other governments may be inspired by these technological capabilities existing in the first place. And that’s because once these technological capabilities exist, various governments would want to utilize them for whatever content they may deem worth detecting at a given point in time. And that means that what may be CSAM material in one country may be CSAM material and terrorist content in another country, and in a third country it may be CSAM material, terrorist content, and content that maybe is critical, say, of the political ruling class in that particular country as well. And I think if there’s one thing that we’ve seen with the way that the internet has developed over the last 20 to 25 years, it’s that the ability of companies, and especially large companies, to resist requests or directives from governments has only reduced over time. The incentives against them standing up to governments are very much aligned towards them just complying, because it’s simply much easier from a business perspective, if a government places pressure upon you over an extended period of time, to just give in to certain requests. And we’ve already seen examples of that happen with other services that are ostensibly, or parts of which are, end-to-end encrypted, such as iCloud, where in certain jurisdictions they have separate technical infrastructures that they’ve set up because of requests from governments as well. So if it has started happening there, I think it’s very difficult to see a world in which we won’t see it happening for client-side scanning and these kinds of content as well. 
One other thing that I will say, and that’s especially from a product development perspective, is that Mozilla has also actually had some experience with this and the challenges that come with deploying end-to-end encrypted services and deciding to do them in a privacy-preserving manner, while not collecting metadata. This was a service called Firefox Send, which Firefox and Mozilla had originally created a couple of years ago to allow users to share files easily and anonymously. So you went onto a portal, it had a very low limit, you could upload a file onto the portal, and once you uploaded a file you got a link, and then the individual could click on the link and download it. The service worked reasonably well for a couple of years, but what we realized, I think towards the end of its lifespan, was that there were also some use cases in which it was being used by malicious actors to actively deploy harmful content, in some cases malware, in some cases materials that otherwise would be implicated in investigations. And once we evaluated whether we could deploy mechanisms that would scan for such content on devices, which in our case was the browser, which has even less of a possibility of doing such actions, we decided that it was better for that piece of software to not exist rather than for it to create the risks that it did for users, without the trust and safety options that would otherwise have been available to us, because it was end-to-end encrypted. So that’s also, I think, a nod towards the fact that there are different streams and levels of use cases to which end-to-end encryption can be deployed, and different kinds of trust and safety measures that could be deployed to account for different threat vectors, if you would like to call them that. And the use case that we’re specifically talking about, which is client-side scanning, is most popular right now for messaging, but the way that it’s actually been deployed in the past, or almost been deployed in the past, by, say, a company like Apple, which was the closest, was actually scanning all information that would be present on a device before it would be backed up. And that’s the final point that I’m making: there’s also this implication that we are presuming this is a technology that will only scan your content after you hit the send button, or when you’re hitting the send button, but in most of the ways in which it’s actually been deployed, it’s been deployed in a way where it would proactively scan individuals’ devices to detect content before it is backed up or uploaded onto a server in some form. And that’s a very, very thin line to walk between doing that and just scanning content all the time in order to detect whether there’s something that shouldn’t exist on the device, and that’s a very scary possibility.
Namrata Maheshwari:
Thanks, Udbhav. Is Sarah online, Reits? I believe she’s had an emergency, but that’s fine, I will come back to her if she’s able to join again. Eliska, in many ways the EU has been at the center of this discourse, or what is also known as the Brussels effect. We see a lot of policy proposals, debates, and discourses on internet governance and privacy and free expression traveling from the EU to other parts of the world. It also happens horizontally across other countries, but still in a disproportionate way from the EU to elsewhere. More recently, there have also been proposals around looking at the question of content moderation on encrypted platforms. What would you say are the signals from the EU for the rest of the world, from a privacy, free expression, and safety perspective, on what to do and what not to do? Thank you.
Eliska Pirkova:
Indeed, the EU regulatory race has been quite intense in the past few years, also in the area of content governance and platform accountability. Specifically in the context of client-side scanning, I’m sure many of you are aware of the still-pending proposal on child sexual abuse material. It’s the EU regulation which, from the fundamental rights perspective, is extremely problematic. As part of the EDRi network, there was a position paper that contains the main critical points around the legislation, and a couple of them I’ve already summarized during my first intervention. The entire regulation is problematic due to the disproportionate measures it imposes on private actors, from detection orders and other measures that can only be implemented through technical solutions such as client-side scanning, to very short-sighted justifications for the use of these technologies, very much based on that risk approach that I’ve already explained at the beginning, but also ultimately not recognizing and acknowledging that the use of such technology will also violate the prohibition of general monitoring, because of course these technologies will have to scan content indiscriminately. And I’m mentioning the ban on general monitoring because, if you ask me about the impact of EU regulation, of course another very decisive law in this area was, or is, the Digital Services Act. Even though the Digital Services Act regulates user-generated content disseminated to the public, if we speak about platforms, to some minimum extent we could say there are some minimum basic requirements for private messaging apps too, even though that is not the main scope of the Digital Services Act. But the DSA still has a lot to say in terms of accountability, transparency criteria, and other due diligence measures that this regulation contains, and we are really worried about the interplay between this horizontal legislative framework within the EU and the ongoing, still-negotiated proposal, the proposed regulation on child sexual abuse material. If it were to stay in its current form, and we are really not there yet, of course there would be a number of issues that would be in direct violation of the existing Digital Services Act, especially those measures that stand at the intersection between these two regulations. And of course this sends a very dangerous signal to other governments outside the European Union, governments that will definitely abuse these kinds of tools, especially if democratic governments within the EU legitimize the use of such technology, which could ultimately happen, and we hope it won’t. And there is a significant effort to prevent this regulation from being adopted at all, which is probably at this stage way too late, but at least to do as much damage control as possible. So we have to see how this goes, but of course the emphasis on regulation within the European Union around digital platforms in general is very strong. There were a number of other laws adopted in recent years, and it will definitely trigger this Brussels effect that we saw in the case of the GDPR, but also in the case of other member states within the EU, especially in the context of content governance, for instance the infamous NetzDG in Germany, where Justitia is running a report every year in which they clearly show how many different jurisdictions around the world follow this regulatory approach. 
And if it’s coming directly from the European Union, the situation will only get… you know, as much as I believe in some of those laws and regulations and what they try to achieve, everything in the realm of content governance and freedom of expression can be significantly abused if it ends up in the wrong hands and in a system that doesn’t take constitutional values and the rule of law seriously.
Namrata Maheshwari:
Thank you. Udbhav, my question for you is actually the flip side of my question for Eliska. Given that so much of this debate is still dominated by regions in the global north, mostly the US, UK and EU, how can we ensure that the global majority plays an active role in shaping the policies and the contexts that are taken into account when these policies are framed? And what do you think tech platforms can do better in that regard?
Udbhav Tiwari:
Thanks, Namrata. I think that generally speaking, if we were to look, just for the first minute, at the context of end-to-end encrypted messaging, I would say that probably the only country that already has a law on the books, where at least the government doesn’t seem to have made a direct connection yet between possibilities like client-side scanning and regulatory outcomes, is India. Because India currently has a law in place that gives the government the power to demand the traceability of pieces of content in a manner that still preserves security and privacy. So I don’t think it’s too much of a stretch for, say, a government stakeholder in India to say, why don’t we develop a system where there’s a model running on every device, or a hash, or a system that scans for certain messages on every device, and the government provides hashes, and then you need to essentially scan a message before it gets encrypted and report to us whether it is a match, and if it is a match, it means that that individual is spreading messages that are either leading to public order issues or other kinds of misinformation that they want to clamp down on. And the reason I raise that, even though traceability is not necessarily a client-side scanning issue, is that I actually think that the conversation is both a lot more nascent in the vast majority of global majority contexts, but it also has a lot more potential to cause much more harm. And that’s because a lot of these proposals both float under the radar and don’t get as much attention internationally, and ultimately the only thing that protects the individuals in these jurisdictions is the willingness of platforms to comply with those regulations or not. Because so far, apart from the notable exception of China, where in general the amount of control that the state has had over the internet has been quite different for long enough that there are alternative systems, to the point at which I think the only known system that I’ve read of that actually has this capability is the Green Dam filter, I think it’s called, in China. It was originally installed on, and I think it’s almost mandatory for it to be present on, personal computers, and was originally recommended as a filter for pornographic websites and adult content, but there have been reports that since then it may have reported to the government when people have either searched for certain keywords or looked for content that may not necessarily be approved as well. And I think that showcases that in some places client-side scanning may not be this hypothetical reality that will exist in the future but may already have existed for some time, and that, given the fact that we are only relying, for better or for worse, on the will of the platforms to resist such requests before they end up being deployed, the conversation that we need to then start having is, one, what are the ways in which people outside these jurisdictions are actually holding platforms to account when these measures get passed? So if they do get passed, saying, do you intend to comply with it? If you don’t intend to comply with it, what is your plan for if the government escalates its enforcement actions against you? 
And as we've seen in many countries in the past, those can get pretty severe. Ultimately I think this is something that will need to be dealt with at a country-to-country level, not necessarily at a platform-to-country level, because depending on the value of the market for the business, or the strength of that market as a geopolitical power, the ability of a platform to resist demands from a government is ultimately limited. They can try, and some of them do, and many of them don't, but ultimately it's something that only international attention and international pressure can reasonably move the needle on. The final point I'll make is that even when it comes to the development of these technologies, these are still very much Western-centric technologies, where a lot of the models they are trained on, and a lot of the information these things are designed around, come from a very different realm of information that may not really match up with realities in the global majority. I have read of numerous examples outside the end-to-end encrypted context where, for example, platforms block certain keywords that are known to be secret keywords for CSAM. These are not very well known and they vary radically across jurisdictions, so a term may seem like an innocuous word that means something completely different in a local language, but if you search for it you will find users and profiles where CSAM actually exists. Just finding out what those keywords are in various local languages, in individual jurisdictions, is something that many platforms take years to be able to do well. And that's not even an end-to-end encryption or client-side scanning problem; it's a how-much-are-you-investing-in-understanding-local-context, how-much-are-you-investing-in-understanding-local-realities problem. It's partly because those measures fail, because platforms don't act quickly enough on unencrypted content or don't account for local context enough, that governments also end up resorting to measures like recommending client-side scanning. That's by no means to say that it's the fault of these platforms that these measures or these ideas exist, but there's definitely a lot more they could do in the global majority to actually deal with the problem on open systems, where they have a much better record of enforcement in English and in countries outside the global majority than within it.
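To make the mechanism under discussion concrete, here is a minimal sketch, in Python, of the kind of on-device hash matching before encryption that Udbhav describes a government could demand. Every name, value and function in it is a hypothetical placeholder, not a description of any real messenger; deployed proposals generally describe perceptual hashes of images rather than exact hashes of text, so this is only an illustration of the control flow.

```python
# Minimal sketch of on-device hash matching before encryption.
# All names and values are hypothetical placeholders.
import hashlib

# A hash list supplied by an outside authority (placeholder value).
BLOCKED_HASHES = {hashlib.sha256(b"example flagged payload").hexdigest()}


def matches_blocklist(plaintext: bytes) -> bool:
    """Check the plaintext against the supplied hash list."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKED_HASHES


def encrypt(plaintext: bytes) -> bytes:
    """Placeholder for the messenger's end-to-end encryption step."""
    return plaintext[::-1]  # stand-in only, not real cryptography


def send(plaintext: bytes) -> None:
    # The contested step: the check runs on the device *before* encryption,
    # so the plaintext is inspected even though transport stays encrypted.
    if matches_blocklist(plaintext):
        print("match found; would be reported under such a mandate")
    ciphertext = encrypt(plaintext)
    print(f"sending {len(ciphertext)} encrypted bytes")


if __name__ == "__main__":
    send(b"hello")                    # no match, sent normally
    send(b"example flagged payload")  # matches the list, would be reported
```

Even in this toy form the governance point above is visible: whoever controls the contents of the hash list controls what gets flagged on the device.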
Namrata Maheshwari:
Thank you. I have one last question for Sarah, and then we’ll open it up to everybody here, and if anybody is attending online, so please feel free to jump in after that. Sarah, as our AI expert on the panel, what would your response be to government proposals that treat AI as a sort of silver bullet that will solve problems of content moderation on encrypted platforms?
Sarah Myers West:
So, I think one thing that's become particularly clear over the years is that content moderation is, in many respects, an almost intractable problem. And though AI may present as a very attractive solution, it's in many ways not a straightforward one. In fact, it's one that introduces new and likewise troubling problems. AI, for all of its many benefits, remains imperfect. And there's a need for considerably more scrutiny of the claims being made by vendors, particularly given the current state of affairs, where quite few models are going through any sort of rigorous independent verification or adversarial testing. I think there are concerns about harms to privacy. There are concerns about false positives that could paint innocent people as culprits and lead to unjust consequences. And lastly, there's been research showing that malicious actors can manipulate content in order to bypass these automated systems. This is an issue that's endemic across AI, underscoring even further the need for much more rigorous standards for independent evaluation and testing. So before we put all of our eggs in one basket, so to speak, I think it's really important to, one, evaluate whether AI, broadly speaking, is up for the task, and then, two, really look under the hood and get a much better picture of what kinds of evaluation and testing are needed to verify that these AI systems are in fact working as intended, because by and large, the evidence is indicating that they're very much not.
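Sarah's false-positive concern can be made concrete with simple base-rate arithmetic. The sketch below uses invented numbers chosen purely for illustration, not measurements of any real system; it shows how, when the targeted content is rare, even a seemingly accurate detector produces flags that are mostly wrong.

```python
# Illustrative base-rate arithmetic for the false-positive point.
# All rates below are assumptions chosen only to show the effect.
prevalence = 1e-5            # assumed share of scanned messages that are actually illegal
true_positive_rate = 0.99    # assumed detector sensitivity
false_positive_rate = 0.001  # assumed rate of flagging innocent messages

flagged_true = prevalence * true_positive_rate
flagged_false = (1 - prevalence) * false_positive_rate
precision = flagged_true / (flagged_true + flagged_false)

print(f"Share of flags that are correct: {precision:.2%}")
# At these assumed rates, roughly 99 out of 100 flagged messages are innocent.
```

At those assumed rates the detector is right about once in every hundred flags, which is why independently measured error rates matter before such systems are relied on at scale.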
Namrata Maheshwari:
Thank you, Sarah. And thank you all so much on the panel. I’ll open it up to all the participants, because I’m sure you have great insights and questions to share as well. Do we have anybody who wants to go first? Great, sure. Could we, before you make your intervention, could you just, in a line, maybe share who you are?
Audience:
Oh, no. Is it? Okay, it's better now. Good morning, everyone. Or, yeah, still good morning. My name is Katarzyna Staciwa, and I represent the National Research Institute in Poland, although my background is in law enforcement and criminology, and soon also clinical sexology. So, I really want the voices of children to be present in this debate, because they were already mentioned in the context of CSAM, which is child sexual abuse material, and scanning, and on some other occasions. But I think there is a need to make a difference between general monitoring or general scanning, and scanning for this particular type of content. It is such a big difference, because it helps to reduce this horrendous crime. And there are already techniques that can be reliable, like hashes. And by hashes, I also mean the experience of hotlines, the INHOPE hotlines present all over the world, and that is already, I believe, more than 20 years of experience of this sort of cooperation. So, hashes are gathered in a reliable way. There is 3-I verification in the process of stating whether a particular photo or video is CSAM. So, it's not like general scanning; it's scanning for something that has been corroborated before by an expert. And then on AI: I'm lucky enough, because my institute is actually working on an AI project, and we train our algorithms to detect CSAM in a big batch of photos or videos. And I can tell you that this has been very successful so far. We also use a current project by INHOPE that follows a specific ontology, so we train algorithms in a very detailed way to pick up only those materials that are clearly defined in advance. And again, it's an experience of years of international cooperation. And I can tell you that general monitoring is something very much different from scanning for a photo or video of a six-month-old baby that is being raped. So, please take it into consideration in any future discussions that while we have an obligation to take care of privacy and online safety, we first have an obligation to protect children from being harmed. And this is also deeply rooted in all the EU conventions and all the UN conventions and EU law. So, we have to make a difference, we have to make a decision, because for some of these children, it will be too late. And I will leave you with this dilemma. Thank you.
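The verification workflow the speaker describes, where material is assessed by experts before its hash ever enters a detection list, can be sketched roughly as below. The reviewer threshold, names and data structures are illustrative assumptions for this report, not a description of INHOPE's or any hotline's actual systems.

```python
# Rough sketch of a verified hash list: a hash is only added after several
# independent expert assessments agree. Threshold and names are assumptions.
import hashlib
from collections import defaultdict

REQUIRED_CONFIRMATIONS = 3  # assumed number of independent reviewers

_pending: dict[str, set[str]] = defaultdict(set)  # hash -> reviewer ids so far
verified_hashes: set[str] = set()                 # only these are ever matched against


def record_assessment(content: bytes, reviewer_id: str) -> bool:
    """Register one expert assessment; return True once the hash is verified."""
    digest = hashlib.sha256(content).hexdigest()
    _pending[digest].add(reviewer_id)
    if len(_pending[digest]) >= REQUIRED_CONFIRMATIONS:
        verified_hashes.add(digest)
        return True
    return False


if __name__ == "__main__":
    sample = b"placeholder for assessed material"
    for reviewer in ("analyst-1", "analyst-2", "analyst-3"):
        verified = record_assessment(sample, reviewer)
    print(f"verified: {verified}, list size: {len(verified_hashes)}")
```

The design choice it illustrates is the one at issue in the exchange that follows: detection is limited to items that human experts have corroborated in advance, as opposed to a general classifier run over everything.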
Namrata Maheshwari:
Thank you. Thank you so much for that intervention and respect all the work you’re doing. Thank you for sharing that experience. I think one thing that I can say for everybody on the panel and in the room is that all of us are working towards online safety. And I know we’re at a point where we’re identifying similar issues, but looking at the solution from different lenses. So, I do hope that conversations like this lead us to solutions that work for safety and privacy for everybody, including children. So, thank you so much for sharing that. I really value it. Anybody else?
Audience:
Thank you for the great presentation. I'm Marlena Wozniak from the European Center for Not-for-Profit Law. And thank you for your intervention; I'm following up on that. I'd love to hear from you, Eliska. You mentioned the potential misuse of EU regulation. Then, more broadly, how can this kind of child safety narrative also be used as a slippery slope for other narratives like counterterrorism or fighting human trafficking, which are all laudable goals which, as human rights advocates, we all fight for. And thank you for your mention of child protection. Indeed, online safety applies to all, especially marginalized groups. But I'd love to hear from you how it's not as easy, it's not a black or white kind of picture, and how these narratives can often be abused and weaponized to actually prevent encryption.
Eliska Pirkova:
Thank you so much. Great question. And thank you very much for your contribution. From the position of a digital rights organization, we, of course, advocate for the online safety and protection of the fundamental rights of all. And, of course, children also have the right to be safe, and they equally have the right to privacy. And we can go into the nitty gritty details of general monitoring, how these technologies work and whether there is any way general monitoring would not occur, and I think that maybe we would even disagree to some extent. But the point is that the goal is definitely the same for all of us. And especially when it comes to marginalized groups, as Marlena rightly pointed out, it's a major priority for us, too. But I definitely find it difficult, mainly as an observer, because we rely fully on the EDRi network in Brussels, the European Digital Rights network, who leads our work on child sexual abuse material. And I often see that precisely the question of children's rights is being, to some extent, I would say, I'm trying to find the right term, but the emphasis on it, even though it's a number one priority for all of us, can be used in the debate to counter-argue against opinions that are slightly more critical towards some technical solutions, while no one ever disputes the need to protect children and that they come first. And that often complicates things and maybe becomes, to some extent, almost counterproductive. Because I don't think that we have any differences in terms of the goals that we are trying to achieve. We are all aiming at the same outcome in the process, but perhaps the means and ways and the policy and regulatory solutions that we are aiming at might differ, and that's of course subject to debate and to ongoing negotiations: what is that solution? None of them will ever be perfect, and there will have to be some compromises made in that regard. But I do find this dichotomy, these very straightforward black-and-white framings, when we are doing rights advocacy and almost occasionally trying to put it in a way that we should choose a side, incredibly problematic, because there is no need for that. I think we all, as I said, have the same outcome in mind. So I don't know whether I answered your question, but indeed this is a very complex and complicated topic, and we need to continue having this dialogue as we have today, and inform each other's positions and try to see each other's perspective in order to achieve the successful outcome that we are striving for, and that's the highest level of protection.
Audience:
Thank you. Vagesha and then Hioranth. Hi, thank you. Is this working? Okay. Hi, I'm Vagesha, I'm a PhD scholar at Georgia Tech. Kudos to all of you for condensing all of that material into 40 minutes; it's a vast area that you've covered here. I have a comment that will lead to a question, so I'll be quick. Eliska, you mentioned significant risk in the beginning, and I was thinking about how significant risks on any sort of online platform are often masked by concerns of national security when it comes to governments directly, right, and national security risk can be quite subjective and related to the context of which government it is and how they interact with it. And I think Udbhav also mentioned how harmful content is a big problem in this space; I mean, all of us agree about that. My question, and this is to everybody, is largely alluding to, one, when you were talking about scraping of content to be used further on, how many of the apps that are available online actually store their data in an encrypted format, so how big is the problem of scraping of that data given it is in an encrypted format? And two, how do we think about it from a user's perspective, so what can a user do directly to, not solve this problem, but intervene in this problem and present their perspective? Thank you.
Namrata Maheshwari:
Actually, do you want Udbhav to take the question, given the platform reference? And could I request everyone to just keep the questions brief so that more people can participate. We have only a few more minutes to go. Thank you.
Udbhav Tiwari:
Sure. So on the platform front, I think how big the problem is and how pervasive it is is an interesting question, because on one level it depends on whose perspective you're looking at it from. If you're looking at it from the perspective of an actor that either produces or consumes child sexual abuse materials, then it's arguably a lot of them, because this is how, one would argue, they communicate and share information with each other, through means that either aren't online at all or are encrypted. But I think that's definitely a space that needs a lot more study, especially going into what the vectors are through which these pieces of information are shared and communicated, because there has been some research on how much of it is online, how much is offline, how much is natural discovery and how much is discovery where you have to seek out the fact that it exists, but a lot of that information is very jurisdiction-specific and I think overall has not been answered to the degree that it should be. On what users themselves can do, it falls broadly into three categories. One is reporting itself, because even on other systems the ability for a user to say, I have received or seen content like this and I want to tell the platform that this is happening, is one route. The second route, and this applies to more limited systems, is saying the content exists in this form and I would like to take it directly to the police or a law enforcement agency: I have it from this user, in this way, and this is the problem it's creating. And ultimately the third, and there has been a lot of research on this, is intervening at the social level, where, if it's somebody you know, you talk to them about why this is problematic, you ask them to get professional psychiatric help, and they essentially get treated as though they have a disease. Platforms can or cannot play a role in this: some of them can proactively prompt you to seek help, some of them can tell you it's a crime, and there are some countries, like India, where courts have mandated that these warnings be proactively surfaced, and laws too. But ultimately I think it's an area that needs a lot more study, which just hasn't happened so far.
Namrata Maheshwari:
Just to very quickly add to that before I pass it on: all of this is by no means to say that, you know, platforms play a role in keeping people safer in a way that governments don't. By all means, we need measures to make platforms more accountable, including the ones that are end-to-end encrypted, absolutely, but the question is just how to do it in a way that most respects fundamental rights. I'll pass it to you and then to the lady in the back. Online, Riana, Sarah, if there's anything that you want to add to any question, please just raise your hand and we'll make sure you can come in.
Audience:
Yeah, my name is Rao Palme from Electronic Frontier Finland, and I don't have a question, but I would just like to respond as well to the law enforcement representative here that I have lost a lot of trust in law enforcement using their tools for what they actually say they use them for. For example, in Finland they tried to introduce a censorship tool, well, they did, in 2008, and in the end it took a hacker to uncover the secret police list that censors the websites. The rationale for the tool was that it has to work like that because the CSAM material is hosted in countries that our law enforcement doesn't have access to or even any cooperation with. And there was this hacker who scanned the internet to find out the secret list; he was able to compile about 90% of that list, and we actually went through the material and had a look at what's in there. First of all, less than 1% was actual child sexual abuse material. And the second point, which I think is even stronger: guess the biggest country that hosted the material? It was the US. After that, the Netherlands. After that, the UK. In fact, the first 10 countries were Western countries where all you need to do is pick up the phone and call them to take it down. That's it. Why do they need a censorship tool? The same goes for this kind of client-side scanning. I feel it's going to be abused, it's going to be used for different purposes, afterwards it goes to gambling and so on. So it's a real slippery slope, and it's been proven before that that's how it goes. Thank you.
Thank you, and thank you very much for your comment. So I work for ECPAT International, which some of you will know; we're at the forefront of actually advocating for the use of targeted technology to protect children in online environments. What's interesting, just even about the people in this room, is that I think what we're seeing is an example, and I say this for both sides, of the conversation being divided. And I'm very happy I'm here and I'm really enjoying this conversation, because I absolutely believe in critically challenging our own perspectives and views on different issues, and it's been really interesting to hear particularly the point about the Global South and different jurisdictions. I think we have a system that is working. It's not perfect, and there are examples where there have been problems, but in general the system is working very well, and we could give many other examples of why that is, but we need to build on the existing system to expand out into other regions. One of the things I think is interesting, and this has been a key theme of the IGF for me, is this issue of trust in institutions and trust in tech. Trust in general is very difficult to achieve: it's easy to lose and hard to gain, and on this issue it's at the forefront of the problem. One of the things I always regret is that there isn't more discussion of where we do agree, because there are areas where we agree. One thing that comes up when we deal with issues of trust is transparency, whether that's in processes, algorithmic transparency, oversight, reporting. They're not perfect, but as civil society we can call for accountability, so I think those are areas where we agree, and I do wish we were speaking a little bit more about that.
In terms of the legislation and general monitoring, you're right, we're not going to go into the details of the processes in the EU, but I do think there is sometimes a convenient conflation of technology in general and specific technologies that are used for certain things. If we talk about targeted CSAM detection tools and spyware, they are not the same thing, and I think sometimes there's a convenient conflation of different technologies that are used for different ends. The other thing, and this is very much to your point about the data sets on which these tools are trained: it's true that we need to be doing much better at understanding and having data that will avoid any kind of bias in the identification of children. But to this final point, one of the reasons for differentiating on the hosting of content, which is very much related to internet infrastructure, though it is shifting, is also that we need to talk about victim identification. One of the reasons to take down and refer child sexual abuse material is that it gets into processes where children can be identified, and we now have decades of experience of very successful processes whereby law enforcement are actually identifying children and disclosing on their behalf, because we have to remember that child sexual abuse material is often the only way a child will disclose, because children do not disclose. And one of the fallacies, I'm sorry, I will finish here, one of the fallacies in the debate about the child rights argument is that we are calling for technical solutions as a silver bullet. Absolutely not. I think one of the things we all agree on is that this is a very complex puzzle, and prevention means technology, prevention means education, prevention means safety measures, prevention means working with perpetrators; it's everything that we need to be doing, and we're absolutely calling for that. So I suppose it's not a question, but I wanted to make that point, and maybe it's a question or a call to action: we really need to be around the table together, because I think there are areas where we absolutely are agreeing.
Namrata Maheshwari:
Absolutely agree with that, and I do hope we'll have more opportunities to talk about the issues that we all care about. Unfortunately, we're over time already, but I know that Riana has had her hand raised for a bit, so Riana, do you want to just close us out in one minute?
Riana Pfefferkorn:
Sure. So to close, I guess I'll just emphasize something that Eliska said, which is that we know that all fundamental rights are meant to be co-equal, with no one right taking precedence over any other, and how to actually implement that in practice is extremely difficult. But it applies to things like child safety as well, these contentious issues that we can get stuck on, and that's the topic of a report that I helped author, including with an emphasis on child rights, as part of DFRLab's recent Scaling Trust on the Web report, which goes into more depth on all the different ways that we need to be forward-looking with regard to finding equitable solutions for the various problems of online harms. I also just want to make sure to mention that when it comes to trustworthiness of institutions, we need everybody to be holding governments accountable as well. There was recent reporting that Europol, in some of the closed-door negotiations over the child sexual abuse regulation in the EU, demanded unlimited access to all data that would be collected, and to have that passed on to law enforcement so that they could look for evidence of other crimes, not just child safety crimes. So in addition to looking to platforms to do more, we also need everybody, child safety organizations included, to be holding governments to account and ensuring that, if they are demanding these powers, they cannot go beyond them and use one particular topic as the tip of the spear to demand unfettered access for all sorts of crime investigations, because that goes beyond the necessity and proportionality that is the hallmark of a human-rights-respecting framework. Thanks.
Namrata Maheshwari:
Thank you. A big thank you to all the panelists, Sarah, Riana, Udbhav, Eliska, and to Reitz, thank you for moderating online. And thank you all so very much for being here and for sharing your thoughts. We hope all of us are able to push our boundaries a little bit and arrive at a common ground that works for the best of all users online. Thank you so much. Have a great IGF. Thank you.
Speakers
Audience
Speech speed
168 words per minute
Speech length
1948 words
Speech time
695 secs
Arguments
Need to make a difference between general scanning or monitoring and scanning for specific harmful content like child sexual abuse material (CSAM)
Supporting facts:
- Hashing techniques for verification process
- Experience of hotlines
- AI project by InHope
Topics: CSAM, Child safety, General scanning, Internet monitoring
Protection of children from harm should be a primary obligation
Supporting facts:
- EU conventions and UN conventions on child protection
- Obligation rooted in EU law
Topics: Child protection, Internet safety, CSAM, Law enforcement
Safety and privacy for everyone, including children, should be prioritized when considering online solutions
Supporting facts:
- Namrata and the panel members are working towards online safety solutions
Topics: Online Safety, Privacy, Child Protection
Significant risks on online platforms are often masked by national security concerns
Topics: online safety, national security, privacy
Scraping of data and its storage in an encrypted format on online apps
Topics: data scraping, encryption, data storage
Differences in means and ways of policy solutions for online data collection and safety
Topics: policy solutions, online data safety
Children’s rights can be used to offset the arguments against certain technical solutions for online safety
Topics: children’s rights, online safety, technical solutions
There is a lack of trust in law enforcement’s use of surveillance tools, due to instances when the tools have been misused or overused.
Supporting facts:
- An example given is from Finland, where a censorship tool was introduced in 2008. However, a hacker was able to compile about 90% of the secret list of censored websites, revealing that less than 1% was actual child sex abuse material and much was hosted in Western countries with which Finland has cooperation.
Topics: law enforcement, surveillance, internet censorship
The existing system to protect children online is working but needs improvement, particularly in issues of transparency, expanding to other regions and differentiating between various types of technology.
Topics: online child protection, transparency, technology
Report
During the discussion on child safety in online environments, several speakers emphasised the necessity of prioritising the protection of children from harm. They stressed the importance of distinguishing between general scanning or monitoring and the specific detection of harmful content, particularly child sexual abuse material (CSAM).
This distinction highlighted the need for targeted approaches and solutions to address this critical issue. The use of artificial intelligence (AI) and curated algorithms to identify CSAM content received support from some participants. They mentioned successful implementations in various projects, underlining the potential effectiveness of these advanced technologies in detecting and combating such harmful material.
Specific examples were provided, including the use of hashing techniques for verification processes, the valuable experience of hotlines, and the use of AI in projects undertaken by the organisation InHope. However, concerns were raised regarding the potential misuse of child safety regulations.
There was apprehension that such regulations might extend beyond their intended scope and encroach on other important areas, such as encryption. It was stressed that policymakers should be wary of unintended consequences and not let child safety regulations become a slippery slope towards other narratives, such as counterterrorism, or towards compromising important tools like encryption.
The participants also emphasised the significance of online safety for everyone, including children, and the need to prioritise this aspect when developing online solutions. Privacy concerns and the protection of personal data were seen as vital considerations, and transparency in online platforms and services was highlighted as a crucial element in building trust and safeguarding users, particularly children.
The existing protection systems were acknowledged as generally effective but in need of improvement. Participants called for greater transparency in these systems, expansion to other regions, and better differentiation between various types of technology. They stressed that a comprehensive approach was required, involving not only the use of targeted technology but also education, safety measures, and addressing the root causes by dealing with perpetrators.
There were also concerns voiced about law enforcement's use of surveillance tools in relation to child safety. Instances of misuse or overuse of these tools in the past have created a lack of trust among some speakers. An example was provided from Finland, where a hacker compiled about 90% of the secret list of websites blocked by a national censorship tool, revealing that less than 1% of them contained actual child sexual abuse material.
In conclusion, the discussion on child safety in online environments highlighted the need to differentiate between general scanning and scanning for specific harmful content. It emphasised the importance of targeted approaches, such as the use of AI and curated algorithms, to detect child sexual abuse material.
However, concerns were raised about the potential misuse of regulations, particularly in the context of encryption and other narratives like counterterrorism. The protection of online safety for everyone, the improvement of existing systems, and a comprehensive approach involving technology, education, and safety measures were identified as crucial elements in effectively protecting children online.
Eliska Pirkova
Speech speed
171 words per minute
Speech length
2097 words
Speech time
734 secs
Arguments
AI tools or content scanning in encrypted environments can have serious consequences for security and safety within the online environment, especially in crisis-hit regions.
Supporting facts:
- The process of content moderation, involving detection, evaluation, and assessment of content is at risk
- Technologies like client-side scanning and weakening encryption are being increasingly used, even in democracies
- These technologies can only identify previously identified illegal content, leading to false positives and negatives
Topics: AI tools, content scanning, encrypted environments, online security, crisis regions
The role and responsibilities of digital platforms significantly increase in the context of crisis, where the state is failing.
Supporting facts:
- Digital platforms often act as a last resort of protection and access to remedies
- They should understand their operational environment and the consequences of complying with government pressures
Topics: digital platforms, role and responsibilities, crisis context
EU’s pending proposal on child sexual abuse material is problematic from a rights perspective
Supporting facts:
- The regulation imposes disproportionate measures on private actors that can only be implemented through technologies like client-side scanning
- Use of such technology could violate the prohibition of general monitoring
Topics: EU regulation, Content governance, Client-side scanning, Child sexual abuse material
The Brussels effect will be triggered by the EU’s regulation of digital platforms
Supporting facts:
- Many jurisdictions around the world have followed the regulatory approach of the EU
- Even measures aimed at achieving positive outcomes can be significantly abused if they end in the wrong systems
Topics: Brussels effect, Digital platforms, Content governance
Emphasis on children’s rights can sometimes be used to counter argue against critical views towards technical solutions
Supporting facts:
- Everyone strives for the same outcome – protection of children
- The means and ways, policy and regulatory solutions might differ
- The debate becomes counterproductive when it’s seen as choosing sides
Topics: child safety, online safety, encryption
Report
The analysis of the arguments reveals several important points regarding the use of technology in different contexts. One argument highlights the potential consequences of using AI tools or content scanning in encrypted environments, particularly in crisis-hit regions. The increasing use of such technologies, even in democracies, is a cause for concern as they can only identify known illegal content, leading to inaccuracies.
Another argument raises concerns about risk-driven regulations, suggesting that they might weaken the rule of law and accountability. The vague definition of ‘significant risks’ in legislative proposals is seen as providing justification for deploying certain technologies. The need for independent judicial bodies to support detection orders is emphasized to ensure proper safeguards.
Digital platforms are seen as having a significant role and responsibilities, particularly in crisis contexts where the state is failing. They act as the last resort for protection and access to remedies. It is crucial for digital platforms to consider the operational environment and the consequences of complying with government pressures.
The pending proposal by the European Union (EU) on child sexual abuse material is seen as problematic from a rights perspective. It disproportionately imposes measures on private actors that can only be implemented through technologies like client-side scanning. This raises concerns about potential violations of the prohibition of general monitoring.
Similar concerns are expressed regarding the impact of the EU’s ongoing, still-negotiated proposal in relation to the existing digital services act. If the proposal remains in its current form, there could be direct violation issues. The argument also suggests that the EU’s legitimization of certain tools could lead to their misuse by other governments.
The global implications of the EU’s regulatory approach, known as the Brussels effect, are also discussed. Many jurisdictions worldwide have followed the EU’s approach, which means that well-intentioned measures may be significantly abused if they end up in inappropriate systems.
The importance of children’s rights is acknowledged, with a recognition that the protection of children is a shared goal. However, differing means, policy approaches, and regulatory solutions may generate counterproductive debates when critical views towards technical solutions are dismissed. In conclusion, the analysis highlights the complexities and potential implications of technology use in various contexts, particularly concerning online security, accountability, and rights protection.
Dialogue and negotiations among stakeholders are crucial to understand different perspectives and reach compromises. Inclusive and representative decision-making processes are essential for addressing the challenges posed by technology.
Namrata Maheshwari
Speech speed
177 words per minute
Speech length
2092 words
Speech time
709 secs
Arguments
Online safety is a shared goal, even if various stakeholders approach the issue from different perspectives
Supporting facts:
- The conversation focuses on finding solutions that work for safety and privacy for everyone, including children
Topics: Online Safety, Children’s Safety, Internet Privacy
Report
The discussion revolves around the crucial topic of online safety and privacy, with a specific emphasis on protecting children. While there may be various stakeholders with different perspectives, they all share a common goal of ensuring online safety for everyone.
The conversation acknowledges the challenges and complexities associated with this issue, aiming to find effective solutions that work for all parties involved. In line with SDG 16.2, which aims to end abuse, exploitation, trafficking, and violence against children, the discussion highlights the urgency and importance of addressing online safety concerns.
It acknowledges that protecting children from online threats is not only a moral imperative but also a fundamental human right. The inclusion of this SDG demonstrates the global significance of this issue and the need for collective efforts to tackle it.
One notable aspect of the conversation is the recognition and respect given to the role of Artificial Intelligence (AI) in detecting child sexual abuse material (CSAM). Namrata Maheshwari expresses appreciation for the interventions and advancements being made in this area.
The use of AI in detecting CSAM is a critical tool in combating child exploitation and safeguarding children from harm. The conversation highlights the need for collaboration and cooperation among various stakeholders, including government authorities, tech companies, educators, and parents, to effectively address online safety concerns.
It emphasizes the shared responsibility in creating a safe online environment for children, where their privacy and security are protected. Overall, this discussion underscores the significance of online safety and privacy, particularly for children. It highlights the importance of aligning efforts with global goals, such as SDG 16.2, and recognizes the positive impact that technology, specifically AI, can have in combating online threats.
By working together and adopting comprehensive strategies, we can create a safer and more secure digital space for children.
Riana Pfefferkorn
Speech speed
189 words per minute
Speech length
2000 words
Speech time
636 secs
Arguments
The idea of scanning encrypted content without undermining privacy and security is currently technically unfeasible
Supporting facts:
- Researchers have been working on the problem but have not found a solution
- The UK government has acknowledged that the process isn’t technically feasible at the moment
Topics: End-to-end encryption, Content scanning, Online safety regulations
Content oblivious techniques can be as effective as content dependent ones for detecting harmful content online
Supporting facts:
- Survey results indicated that a content oblivious technique was deemed to be as or more useful than a content dependent one for almost every category of abuse
- User reporting in particular prevailed across many categories of abuse
Topics: online safety, content oblivious techniques, online content moderation
Metadata analysis, while effective, has major privacy trade-offs
Supporting facts:
- Metadata analysis requires the service to collect and analyze substantial data about its users which could intrude into user privacy
- Some services like Signal purposefully collect minimal data about their users to protect user privacy
Topics: metadata analysis, privacy, online content moderation
All fundamental rights are meant to be co-equal with no one right taking precedence over any other
Supporting facts:
- Implementation of this principle in practice is extremely difficult.
- This principle applies to contentious issues like child safety.
Topics: child safety, data protection, privacy, online harms
Trustworthiness of institutions requires everyone to hold governments accountable
Supporting facts:
- Europol demanded unlimited access to all data that would be collected for crime investigation, not just child safety crimes
- This demand goes beyond the necessity and proportionality that is the hallmark of a human rights respecting framework.
Topics: government accountability, data privacy, child safety, crime investigations
Report
The analysis explores various arguments and stances on the contentious issue of scanning encrypted content. One argument put forth is that scanning encrypted content, while protecting privacy and security, is currently not technically feasible. Researchers have been working on this problem, but no solution has been found.
The UK government has also acknowledged this limitation. This argument highlights the challenges of striking a balance between enforcing online safety regulations and maintaining the privacy and security of encrypted content. Another argument cautions against forced scanning of encrypted content by governments.
This argument emphasizes that such scanning could potentially be expanded to include a wide range of prohibited content, jeopardizing the privacy and safety of individuals and groups such as journalists, dissidents, and human rights workers. It is argued that any law mandating scanning could be used to search for any type of prohibited content, not just child sex abuse material.
The risk extends to anyone who relies on secure and confidential communication. This argument underscores the potential negative consequences of forced scanning on privacy and the free flow of information. However, evidence suggests that content-oblivious techniques can be as effective as content-dependent ones in detecting harmful content online.
Survey results support this notion, indicating that a content-oblivious technique was considered equal to or more useful than a content-dependent one in almost every category of abuse. User reporting, in particular, emerged as a prevalent method across many abuse categories.
This argument highlights the effectiveness of content-oblivious techniques and user reporting in identifying and mitigating harmful online content. Furthermore, it is argued that end-to-end encrypted services should invest in robust user reporting flows. User reporting has been found to be the most effective detection method for multiple types of abusive content.
It is also seen as a privacy-preserving option for combating online abuse. This argument emphasizes the importance of empowering users to report abusive content and creating a supportive environment for reporting. On the topic of metadata analysis, it is noted that while effective, this approach comes with significant privacy trade-offs.
Metadata analysis requires services to collect and analyze substantial data about their users, which can intrude on user privacy. Some services, such as Signal, purposely collect minimal data to protect user privacy. This argument highlights the need to consider privacy concerns when implementing metadata analysis for online content moderation.
The analysis concludes by emphasizing the need for both advocates for civil liberties and governments or vendors to recognize and acknowledge the trade-offs inherent in any abuse detection mechanism. There is no abuse detection mechanism that is entirely beneficial without drawbacks.
It is crucial to acknowledge and address the potential negative consequences of any proposed solution. This conclusion underscores the importance of finding a balanced approach that respects both privacy and online safety. The analysis also discusses the challenging practical implementation of co-equal fundamental rights.
It asserts that fundamental rights, including privacy and child safety, should be considered co-equal, with no single right taking precedence over another. The difficulty lies in effectively implementing this principle in practice, particularly in contentious areas like child safety. Furthermore, the analysis highlights the importance of holding governments accountable for maintaining trustworthiness.
It is argued that unrestricted government access to data under the guise of child safety can exceed the necessity and proportionality required in a human rights-respecting framework. Trustworthiness of institutions hinges on the principle of government accountability. In summary, the analysis provides insights into the complications surrounding the scanning of encrypted content and the trade-offs associated with different approaches.
It emphasizes the need for a balanced approach that considers privacy, online safety, and fundamental rights. Acknowledging the limitations and potential risks associated with each proposed solution is crucial for finding effective and ethical methods of content moderation.
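As a rough illustration of the content-oblivious approach discussed in this summary, the sketch below shows a user-report queue in which the service acts only on what recipients choose to forward, never on scanned message contents. The threshold, class and field names are assumptions made for illustration, not any platform's actual reporting pipeline.

```python
# Minimal sketch of content-oblivious moderation driven by user reports.
# The service never scans messages; it only sees what recipients forward.
# Threshold and field names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ReportQueue:
    escalation_threshold: int = 3                 # assumed number of distinct reporters
    _reports: dict = field(default_factory=dict)  # reported account -> {reporter: text}

    def submit(self, reported_account: str, reporter: str, forwarded_text: str) -> str:
        """A recipient forwards a message they received, together with who sent it."""
        reports = self._reports.setdefault(reported_account, {})
        reports[reporter] = forwarded_text        # keep the forwarded text for human reviewers
        if len(reports) >= self.escalation_threshold:
            return f"escalate {reported_account} for human review"
        return "logged"


if __name__ == "__main__":
    queue = ReportQueue()
    print(queue.submit("account-42", "user-a", "forwarded abusive message"))
    print(queue.submit("account-42", "user-b", "another forwarded message"))
    print(queue.submit("account-42", "user-c", "a third forwarded message"))
```

The design choice it reflects is the one the survey findings above point to: detection is triggered by recipients who already have the plaintext, so the encrypted transport itself is left untouched.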
Sarah Myers West
Speech speed
156 words per minute
Speech length
762 words
Speech time
294 secs
Arguments
Artificial intelligence can be misleading and misunderstood leading to flawed policies.
Supporting facts:
- AI has been a value laden term in policy conversations.
- There is a frequent tendency in the field to see claims about AI being able to serve certain purposes that lack any underlying validation or testing.
Topics: Artificial Intelligence, AI Policies, Misrepresentation
AI in the present-day moment is essentially a computational process based on statistical methods applied to large data sets.
Supporting facts:
- Current definition of AI involves application of statistical methods to very large data sets either produced through commercial surveillance or massive amounts of web scraping.
Topics: Artificial Intelligence, Data Analysis, Computational Process
Content moderation is an almost intractable problem and AI, though attractive, is not a straightforward solution
Supporting facts:
- AI is imperfect
- Few models go through rigorous independent verification or adversarial testing
- Concerns about harms to privacy and false positives are troubling problems introduced by AI
- Research has shown that malicious actors can manipulate content to bypass these AI-based systems
Topics: Artificial Intelligence, Content Moderation
Report
The analysis provides a comprehensive overview of the concerns and criticisms surrounding artificial intelligence (AI). One notable concern is that AI can be misleading and misunderstood, leading to flawed policies. It is argued that in the field, there is a tendency to make claims about AI without proper validation or testing, which undermines trust in the technology.
At present, AI is primarily seen as a computational process that applies statistical methods to large datasets. These datasets are often acquired through commercial surveillance or extensive web scraping. This definition emphasizes the reliance on data-driven approaches to derive insights and make predictions.
However, the ethical implications of this reliance on data need to be considered, as biases and inequalities can be perpetuated and amplified by AI systems. The lack of validation in AI claims is another cause for concern. Many AI systems are said to serve specific purposes without undergoing rigorous testing or validation processes.
Discrepancies and problems often go unnoticed until auditing or other retrospective methods are employed. The absence of transparency and accountability in AI claims raises questions about the reliability and effectiveness of AI systems in various domains. Furthermore, it is evident that AI systems have the potential to mimic and amplify societal inequality.
Studies have shown that AI can replicate patterns of discrimination and exacerbate existing inequalities. Discrimination within AI systems can have adverse effects on historically marginalised populations. This highlights the importance of considering the social impact and ethical implications of AI deployment.
In terms of content moderation, AI is often seen as an attractive solution. However, it is acknowledged that it presents challenges that are difficult to overcome. For example, AI-based content moderation systems are imperfect and can lead to violations of privacy as well as false positive identifications.
Malicious actors can also manipulate content to bypass these AI systems, raising concerns about the effectiveness of AI in tackling content moderation issues. To address these concerns, there is a need for more scrutiny and critical evaluation of the use of AI in content moderation.
Establishing rigorous standards for independent evaluation and testing is crucial to ensure the effectiveness and ethical use of AI technology. This approach can help mitigate the risks associated with privacy violations, false positives, and content manipulation. In conclusion, the analysis underscores the importance of addressing the concerns and criticisms related to AI.
The potential for misrepresentation and flawed policies, the lack of validation and transparency in AI claims, the amplification of societal inequality, and the challenges in content moderation highlight the need for thoughtful and responsible development and deployment of AI technologies.
Ethical considerations, rigorous testing, and ongoing evaluation should be central to AI research and implementation to ensure that the benefits of AI can be realized while mitigating potential harms.
Udbhav Tiwari
Speech speed
194 words per minute
Speech length
2785 words
Speech time
863 secs
Arguments
Technically feasible to develop tools that are limited to scanning certain types of content but difficult to verify reliability and trustworthiness
Supporting facts:
- Platforms already do certain kinds of scanning for unencrypted content
- Mozilla’s experience suggests difficulty in verifying reliability and trustworthiness of such systems
- No system has undergone the level of independent testing and rigorous analysis required
Topics: Content scanning, Client-side scanning, Tool development, Trust and safety
Law and policy aspect of content scanning is more worrying as governments may use the technology for surveillance
Supporting facts:
- Once technological capabilities exist, governments would want to utilize it for whatever content they deem worth detecting
- Ability of companies to resist requests or directives from governments has reduced over time
- Example of iCloud’s separate technical infrastructures due to requests from government
Topics: Content scanning, Law and policy, Government surveillance
The conversation around client-side scanning is much more nuanced in the global majority
Supporting facts:
- Client-side scanning already exists in China, evidenced by the Green Dam software
- Government measures and regulations float under the radar and don’t get international attention
Topics: Client-side scanning, Policy, Privacy, Surveillance
How pervasive child sexual abuse materials are depends on the perspective
Supporting facts:
- The problem seems more serious when viewed from the perspective of actors producing or consuming such materials.
- It’s argued that the actors use encrypted ways of communication or non-online methods.
Topics: Child Sexual Abuse, Online Safety
Need for more study regarding the vectors of communication
Supporting facts:
- Research on the matter has been jurisdiction-specific and inadequate.
- Important to understand how information about child sexual abuse material is shared and communicated.
Topics: Research, Child Sexual Abuse
Report
The analysis conducted on content scanning and online safety highlights several significant points. One of the main findings is that while it is technically possible to develop tools for scanning certain types of content, ensuring their reliability and trustworthiness is a difficult task.
Platforms already perform certain forms of scanning for unencrypted content. However, Mozilla’s experience suggests that verifying the reliability and trustworthiness of such systems poses challenges. Currently, no system has undergone the level of independent testing and rigorous analysis required to ensure their effectiveness.
Another concerning aspect of content scanning is the involvement of governments. The analysis reveals that once technological capabilities exist, governments are likely to leverage them to detect content deemed worthy of attention. This raises concerns about the potential misuse of content scanning technology for surveillance purposes.
Over time, the ability of companies to resist requests or directives from governments has diminished. An example of this is seen in the implementation of separate technical infrastructures for iCloud due to government requests. Therefore, the law and policy aspect of content scanning can be more worrying than the technical feasibility itself.
The importance of balancing the removal of harmful content with privacy concerns is emphasized. Mozilla’s decision not to proceed with scanning content on Firefox Send due to privacy concerns demonstrates the need to find a middle ground. The risk of constant content scanning on individual devices and the potential scanning of all content is a significant concern.
Different trust and safety measures exist for various use cases of end-to-end encryption. The analysis brings attention to client-side scanning, which already exists in China through software like Green Dam. It highlights the fact that the conversation surrounding client-side scanning worldwide is more nuanced than commonly acknowledged.
Government measures and regulations pertaining to client-side scanning often go unnoticed on an international scale. Platforms also need to invest more in understanding local contexts to improve enforcement. The study revealed that identifying secret Child Sexual Abuse Material (CSAM) keywords in different languages takes platforms years, suggesting a gap in their ability to effectively address the issue.
Platforms have shown a better record of enforcement in English than in the global majority, indicating a need for more investment and understanding of local contexts. The issue of child sexual abuse material is highlighted from different perspectives. The extent to which child sexual abuse materials are pervasive depends on the vantage point.
The analysis reveals that actors involved in producing or consuming such content often employ encrypted communication or non-online methods, making it difficult to fully grasp the magnitude of the problem. Further research is needed to understand the vectors of communication related to child sexual abuse material.
Finally, the analysis stresses that users have the ability to take action to address objectionable content. They can report such content on platforms, directly involve law enforcement, or intervene at a social level by reaching out to the individuals involved.
Seeking professional psychiatric help for individuals connected to objectionable content is also important. In conclusion, the analysis of content scanning and online safety identifies various issues and concerns. It emphasizes the need to balance the removal of harmful content with privacy considerations while cautioning against potential government surveillance practices.
Furthermore, the study underscores the importance of understanding local contexts for effective enforcement. The issue of child sexual abuse material is found to be complex, requiring further research. Finally, users are encouraged to take an active role in addressing objectionable content through reporting, involving law enforcement, and social intervention.