The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28

10 Oct 2023 01:15h - 02:15h UTC

Event report

Speakers and Moderators

Speakers:
  • Dr. Sarah Myers West, AI Now
  • Udbhav Tiwari, Mozilla
  • Riana Pfefferkorn, Stanford Internet Observatory
  • Eliska Pirkova, Access Now
Moderators:
  • Namrata Maheshwari, Access Now
  • Daniel Leufer, Access Now


Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.


Session report

Sarah Myers West

The analysis provides a comprehensive overview of the concerns and criticisms surrounding artificial intelligence (AI). One notable concern is that the term 'AI' is often misleading and widely misunderstood, which can result in flawed policies. It is argued that the field has a tendency to make claims about AI systems without proper validation or testing, which undermines trust in the technology.

At present, AI is primarily seen as a computational process that applies statistical methods to large datasets. These datasets are often acquired through commercial surveillance or extensive web scraping. This definition emphasizes the reliance on data-driven approaches to derive insights and make predictions. However, the ethical implications of this reliance on data need to be considered, as biases and inequalities can be perpetuated and amplified by AI systems.

The lack of validation in AI claims is another cause for concern. Many AI systems are said to serve specific purposes without undergoing rigorous testing or validation processes. Discrepancies and problems often go unnoticed until auditing or other retrospective methods are employed. The absence of transparency and accountability in AI claims raises questions about the reliability and effectiveness of AI systems in various domains.

Furthermore, it is evident that AI systems have the potential to mimic and amplify societal inequality. Studies have shown that AI can replicate patterns of discrimination and exacerbate existing inequalities. Discrimination within AI systems can have adverse effects on historically marginalised populations. This highlights the importance of considering the social impact and ethical implications of AI deployment.

In content moderation, AI is often seen as an attractive solution, but it presents challenges that are difficult to overcome. AI-based content moderation systems are imperfect and can lead to privacy violations as well as false positive identifications. Malicious actors can also manipulate content to evade these systems, raising doubts about the effectiveness of AI in tackling content moderation at scale.
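To make the false-positive concern concrete, here is a back-of-the-envelope calculation; all of the numbers are illustrative assumptions rather than figures cited in the session.

```python
# Illustrative base-rate arithmetic (all figures are assumptions, not data
# from the session): even a highly accurate classifier produces large
# absolute numbers of false positives when run over everything users share.
daily_items    = 1_000_000_000  # items scanned per day
prevalence     = 1e-6           # fraction of items that are actually abusive
false_pos_rate = 0.001          # 0.1% of benign items wrongly flagged
true_pos_rate  = 0.99           # 99% of abusive items correctly flagged

abusive = daily_items * prevalence          # 1,000 abusive items
benign  = daily_items - abusive

true_hits  = abusive * true_pos_rate        # ~990 correct detections
false_hits = benign * false_pos_rate        # ~1,000,000 wrongly flagged items

print(f"correct detections per day: {true_hits:,.0f}")
print(f"false positives per day:    {false_hits:,.0f}")
```

On these assumed numbers, wrongly flagged items outnumber genuine detections by roughly a thousand to one, which is why independent evaluation of real-world error rates matters.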

To address these concerns, there is a need for more scrutiny and critical evaluation of the use of AI in content moderation. Establishing rigorous standards for independent evaluation and testing is crucial to ensure the effectiveness and ethical use of AI technology. This approach can help mitigate the risks associated with privacy violations, false positives, and content manipulation.

In conclusion, the analysis underscores the importance of addressing the concerns and criticisms related to AI. The potential for misrepresentation and flawed policies, the lack of validation and transparency in AI claims, the amplification of societal inequality, and the challenges in content moderation highlight the need for thoughtful and responsible development and deployment of AI technologies. Ethical considerations, rigorous testing, and ongoing evaluation should be central to AI research and implementation to ensure that the benefits of AI can be realized while mitigating potential harms.

Audience

During the discussion on child safety in online environments, several speakers emphasised the necessity of prioritising the protection of children from harm. They stressed the importance of distinguishing between general scanning or monitoring and the specific detection of harmful content, particularly child sexual abuse material (CSAM). This distinction highlighted the need for targeted approaches and solutions to address this critical issue.

The use of artificial intelligence (AI) and curated algorithms to identify CSAM received support from some participants. They mentioned successful implementations in various projects, underlining the potential effectiveness of these technologies in detecting and combating such material. Specific examples included the use of hashing techniques for verification, the experience of hotlines, and AI-based projects undertaken by the hotline network INHOPE.
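The hashing techniques mentioned here can be illustrated with a minimal sketch. The hash value below is a placeholder and the function is an assumption for illustration; production systems typically use perceptual hashes (PhotoDNA-style) rather than exact cryptographic hashes, so that re-encoded or lightly edited copies of known material still match, but the basic lookup logic is similar.

```python
import hashlib

# Placeholder set of hashes of previously verified material, as a hotline or
# platform might maintain (the value below is not a real hash of anything).
KNOWN_MATERIAL_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if this exact file already appears on the verified list.

    Note the limitation this implies: only previously identified, unmodified
    copies can be matched; novel material is never detected by such a check.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_MATERIAL_HASHES
```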

However, concerns were raised regarding the potential misuse of child safety regulations. There was apprehension that such regulations might extend beyond their intended scope, encroaching on other important areas such as encryption or being repurposed for other aims such as counterterrorism. It was stressed that policymakers should be wary of unintended consequences and should not let child safety regulations become a slippery slope that feeds other enforcement narratives or compromises important tools like encryption.

The participants also emphasised the significance of online safety for everyone, including children, and the need to prioritise this aspect when developing online solutions. Privacy concerns and the protection of personal data were seen as vital considerations, and transparency in online platforms and services was highlighted as a crucial element in building trust and safeguarding users, particularly children.

The existing protection systems were acknowledged as generally effective but in need of improvement. Participants called for greater transparency in these systems, expansion to other regions, and better differentiation between various types of technology. They stressed that a comprehensive approach was required, involving not only the use of targeted technology but also education, safety measures, and addressing the root causes by dealing with perpetrators.

Concerns were also voiced about law enforcement's use of surveillance tools in relation to child safety, as past instances of misuse or overuse of such tools had eroded trust among some speakers. An example from Finland was cited, where a hacker compiled roughly 90% of a secret list of censored websites and found that less than 1% of the listed sites contained actual child sexual abuse material.

In conclusion, the discussion on child safety in online environments highlighted the need to differentiate between general scanning and scanning for specific harmful content. It emphasised the importance of targeted approaches, such as the use of AI and curated algorithms, to detect child sexual abuse material. However, concerns were raised about the potential misuse of regulations, particularly in the context of encryption and other narratives like counterterrorism. The protection of online safety for everyone, the improvement of existing systems, and a comprehensive approach involving technology, education, and safety measures were identified as crucial elements in effectively protecting children online.

Namrata Maheshwari

The discussion revolves around the crucial topic of online safety and privacy, with a specific emphasis on protecting children. While there may be various stakeholders with different perspectives, they all share a common goal of ensuring online safety for everyone. The conversation acknowledges the challenges and complexities associated with this issue, aiming to find effective solutions that work for all parties involved.

In line with SDG 16.2, which aims to end abuse, exploitation, trafficking, and violence against children, the discussion highlights the urgency and importance of addressing online safety concerns. It acknowledges that protecting children from online threats is not only a moral imperative but also a fundamental human right. The inclusion of this SDG demonstrates the global significance of this issue and the need for collective efforts to tackle it.

One notable aspect of the conversation is the recognition and respect given to the role of Artificial Intelligence (AI) in detecting child sexual abuse material (CSAM). Namrata Maheshwari expresses appreciation for the interventions and advancements being made in this area. The use of AI in detecting CSAM is a critical tool in combating child exploitation and safeguarding children from harm.

The conversation highlights the need for collaboration and cooperation among various stakeholders, including government authorities, tech companies, educators, and parents, to effectively address online safety concerns. It emphasizes the shared responsibility in creating a safe online environment for children, where their privacy and security are protected.

Overall, this discussion underscores the significance of online safety and privacy, particularly for children. It highlights the importance of aligning efforts with global goals, such as SDG 16.2, and recognizes the positive impact that technology, specifically AI, can have in combating online threats. By working together and adopting comprehensive strategies, we can create a safer and more secure digital space for children.

Udbhav Tiwari

The analysis conducted on content scanning and online safety highlights several significant points. One of the main findings is that while it is technically possible to build tools that scan for certain types of content, ensuring their reliability and trustworthiness is difficult. Platforms already perform some forms of scanning on unencrypted content, but Mozilla's experience suggests that verifying the reliability and trustworthiness of such systems is challenging: no system has yet undergone the level of independent testing and rigorous analysis required to establish its effectiveness.

Another concerning aspect of content scanning is the involvement of governments. The analysis reveals that once technological capabilities exist, governments are likely to leverage them to detect content deemed worthy of attention. This raises concerns about the potential misuse of content scanning technology for surveillance purposes. Over time, the ability of companies to resist requests or directives from governments has diminished. An example of this is seen in the implementation of separate technical infrastructures for iCloud due to government requests. Therefore, the law and policy aspect of content scanning can be more worrying than the technical feasibility itself.

The importance of balancing the removal of harmful content with privacy concerns is emphasized. Mozilla’s decision not to proceed with scanning content on Firefox Send due to privacy concerns demonstrates the need to find a middle ground. The risk of constant content scanning on individual devices and the potential scanning of all content is a significant concern. Different trust and safety measures exist for various use cases of end-to-end encryption.

The analysis brings attention to client-side scanning, which already exists in China through software like Green Dam. It highlights the fact that the conversation surrounding client-side scanning worldwide is more nuanced than commonly acknowledged. Government measures and regulations pertaining to client-side scanning often go unnoticed on an international scale.
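Mechanically, client-side scanning moves a check like the hash lookup sketched earlier onto the user's own device, before end-to-end encryption is applied. The sketch below is an assumed illustration of that placement, not a description of any deployed system (including Green Dam).

```python
import hashlib
from typing import Callable

# Hash list distributed to every device by the operator (or mandated by a
# government); the user running the check has no visibility into its contents.
ON_DEVICE_HASH_LIST: set[str] = set()

def report_match(digest: str) -> None:
    # Stand-in for a reporting channel back to the operator or an authority.
    print(f"match reported: {digest}")

def send_with_client_side_scan(attachment: bytes,
                               encrypt_and_send: Callable[[bytes], None]) -> None:
    """Inspect the attachment on the sender's device while it is still in the
    clear, then hand it to the normal end-to-end encryption pipeline. The
    encryption itself is untouched, which is why proponents argue the approach
    is compatible with encryption and critics argue it hollows it out."""
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in ON_DEVICE_HASH_LIST:
        report_match(digest)
    encrypt_and_send(attachment)
```

Because the same hook can match any list of hashes, the question of who controls the list matters as much as the technical design.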

Platforms also need to invest more in understanding local contexts to improve enforcement. It was noted that it has taken platforms years to identify the coded keywords used to trade Child Sexual Abuse Material (CSAM) in different languages, suggesting a gap in their ability to address the issue effectively. Platforms have a better enforcement record in English than in global-majority languages, indicating a need for more investment in, and understanding of, local contexts.

The issue of child sexual abuse material is highlighted from different perspectives. The extent to which child sexual abuse materials are pervasive depends on the vantage point. The analysis reveals that actors involved in producing or consuming such content often employ encrypted communication or non-online methods, making it difficult to fully grasp the magnitude of the problem. Further research is needed to understand the vectors of communication related to child sexual abuse material.

Finally, the analysis stresses that users have the ability to take action to address objectionable content. They can report such content on platforms, directly involve law enforcement, or intervene at a social level by reaching out to the individuals involved. Seeking professional psychiatric help for individuals connected to objectionable content is also important.

In conclusion, the analysis of content scanning and online safety identifies various issues and concerns. It emphasizes the need to balance the removal of harmful content with privacy considerations while cautioning against potential government surveillance practices. Furthermore, the study underscores the importance of understanding local contexts for effective enforcement. The issue of child sexual abuse material is found to be complex, requiring further research. Finally, users are encouraged to take an active role in addressing objectionable content through reporting, involving law enforcement, and social intervention.

Eliska Pirkova

The analysis of the arguments reveals several important points regarding the use of technology in different contexts. One argument highlights the potential consequences of using AI tools or content scanning in encrypted environments, particularly in crisis-hit regions. The increasing use of such technologies, even in democracies, is a cause for concern, since they can reliably identify only known illegal content and become inaccurate beyond that.

Another argument raises concerns about risk-driven regulations, suggesting that they might weaken the rule of law and accountability. The vague definition of ‘significant risks’ in legislative proposals is seen as providing justification for deploying certain technologies. The need for independent judicial bodies to support detection orders is emphasized to ensure proper safeguards.

Digital platforms are seen as having a significant role and responsibilities, particularly in crisis contexts where the state is failing. They act as the last resort for protection and access to remedies. It is crucial for digital platforms to consider the operational environment and the consequences of complying with government pressures.

The pending proposal by the European Union (EU) on child sexual abuse material is seen as problematic from a rights perspective. It disproportionately imposes measures on private actors that can only be implemented through technologies like client-side scanning. This raises concerns about potential violations of the prohibition of general monitoring.

Similar concerns are expressed regarding the impact of the EU's ongoing, still-negotiated proposal on the existing Digital Services Act. If the proposal remains in its current form, it could come into direct conflict with that act. The argument also suggests that the EU's legitimization of certain tools could lead to their misuse by other governments.

The global implications of the EU's regulatory approach, known as the Brussels effect, are also discussed. Many jurisdictions worldwide follow the EU's lead, which means that well-intentioned measures may be significantly abused if they are replicated in legal systems that lack comparable safeguards.

The importance of children’s rights is acknowledged, with a recognition that the protection of children is a shared goal. However, differing means, policy approaches, and regulatory solutions may generate counterproductive debates when critical views towards technical solutions are dismissed.

In conclusion, the analysis highlights the complexities and potential implications of technology use in various contexts, particularly concerning online security, accountability, and rights protection. Dialogue and negotiations among stakeholders are crucial to understand different perspectives and reach compromises. Inclusive and representative decision-making processes are essential for addressing the challenges posed by technology.

Riana Pfefferkorn

The analysis explores various arguments and stances on the contentious issue of scanning encrypted content. One argument put forth is that scanning encrypted content, while protecting privacy and security, is currently not technically feasible. Researchers have been working on this problem, but no solution has been found. The UK government has also acknowledged this limitation. This argument highlights the challenges of striking a balance between enforcing online safety regulations and maintaining the privacy and security of encrypted content.

Another argument cautions against forced scanning of encrypted content by governments. This argument emphasizes that such scanning could potentially be expanded to include a wide range of prohibited content, jeopardizing the privacy and safety of individuals and groups such as journalists, dissidents, and human rights workers. It is argued that any law mandating scanning could be used to search for any type of prohibited content, not just child sex abuse material. The risk extends to anyone who relies on secure and confidential communication. This argument underscores the potential negative consequences of forced scanning on privacy and the free flow of information.

However, evidence suggests that content-oblivious techniques can be as effective as content-dependent ones in detecting harmful content online. Survey results support this notion, indicating that a content-oblivious technique was considered equal to or more useful than a content-dependent one in almost every category of abuse. User reporting, in particular, emerged as a prevalent method across many abuse categories. This argument highlights the effectiveness of content-oblivious techniques and user reporting in identifying and mitigating harmful online content.

Furthermore, it is argued that end-to-end encrypted services should invest in robust user reporting flows. User reporting has been found to be the most effective detection method for multiple types of abusive content. It is also seen as a privacy-preserving option for combating online abuse. This argument emphasizes the importance of empowering users to report abusive content and creating a supportive environment for reporting.
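As a rough illustration of what such a reporting flow might carry, the sketch below uses hypothetical field names; the key point is that content is revealed only for the specific messages a recipient chooses to report, so the provider never has to scan traffic it cannot read.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    """Hypothetical report payload submitted from the reporting user's client."""
    reporter_id: str
    reported_account_id: str
    category: str                   # e.g. "harassment", "csam", "spam"
    disclosed_messages: list[str]   # plaintext the reporter chose to reveal
    reported_at: datetime

# Example usage: the recipient of an abusive message files a report.
report = AbuseReport(
    reporter_id="user-123",
    reported_account_id="user-456",
    category="harassment",
    disclosed_messages=["<content of the offending message>"],
    reported_at=datetime.now(timezone.utc),
)
```

Some end-to-end encrypted services additionally attach a cryptographic "franking" tag to each message so the provider can verify that a reported message is genuine rather than fabricated; that detail is omitted from this sketch.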

On the topic of metadata analysis, it is noted that while effective, this approach comes with significant privacy trade-offs. Metadata analysis requires services to collect and analyze substantial data about their users, which can intrude on user privacy. Some services, such as Signal, purposely collect minimal data to protect user privacy. This argument highlights the need to consider privacy concerns when implementing metadata analysis for online content moderation.
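The privacy trade-off can be seen in even the simplest metadata heuristic: the sketch below (an assumed illustration, not any service's actual rules) only works if the provider retains who messaged whom and when.

```python
from collections import defaultdict
from datetime import datetime

# Metadata log of (sender, recipient, timestamp) tuples. Retaining even this
# much is precisely what minimal-collection services such as Signal avoid.
MessageLog = list[tuple[str, str, datetime]]

def flag_mass_contacters(log: MessageLog, threshold: int = 50) -> set[str]:
    """Flag accounts that message an unusually large number of distinct
    recipients -- a simple, illustrative metadata signal."""
    recipients_per_sender: dict[str, set[str]] = defaultdict(set)
    for sender, recipient, _timestamp in log:
        recipients_per_sender[sender].add(recipient)
    return {s for s, r in recipients_per_sender.items() if len(r) >= threshold}
```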

The analysis concludes by emphasizing the need for both advocates for civil liberties and governments or vendors to recognize and acknowledge the trade-offs inherent in any abuse detection mechanism. There is no abuse detection mechanism that is entirely beneficial without drawbacks. It is crucial to acknowledge and address the potential negative consequences of any proposed solution. This conclusion underscores the importance of finding a balanced approach that respects both privacy and online safety.

The analysis also discusses the challenging practical implementation of co-equal fundamental rights. It asserts that fundamental rights, including privacy and child safety, should be considered co-equal, with no single right taking precedence over another. The difficulty lies in effectively implementing this principle in practice, particularly in contentious areas like child safety.

Furthermore, the analysis highlights the importance of holding governments accountable for maintaining trustworthiness. It is argued that unrestricted government access to data under the guise of child safety can exceed the necessity and proportionality required in a human rights-respecting framework. Trustworthiness of institutions hinges on the principle of government accountability.

In summary, the analysis provides insights into the complications surrounding the scanning of encrypted content and the trade-offs associated with different approaches. It emphasizes the need for a balanced approach that considers privacy, online safety, and fundamental rights. Acknowledging the limitations and potential risks associated with each proposed solution is crucial for finding effective and ethical methods of content moderation.
