Unlocking Trust and Safety to Preserve the Open Internet | IGF 2023 Open Forum #129

11 Oct 2023 05:45h - 07:15h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Brent Carey

New Zealand has shown its commitment to online safety by enacting the Harmful Digital Communications Act in 2015. This legislation takes a principles-based approach to address various forms of online harm, including incitement to commit suicide, breach of confidentiality, and harassment. The act covers both criminal and civil aspects, with NetSafe, a government-approved NGO agency, managing the civil side.

NetSafe plays a vital role in helping New Zealanders resolve online disputes through mediation. Each year, over 25,000 individuals seek assistance from NetSafe, with more than 7,000 engaging in the mediation process. This demonstrates the effectiveness of NetSafe in providing a platform for conflict resolution in the digital realm.

NetSafe has also led the development of the Aotearoa New Zealand Code of Practice for Online Safety and Harms (the ‘Aotearoa Online Safety Code’), launched in July 2022. This code, supported by major platforms like TikTok, Meta, Amazon, Twitch, and Twitter, addresses issues such as hate speech, disinformation, and misinformation. By adopting risk-based approaches, the code aims to tackle these challenges and create a safer digital environment.

New Zealand is actively seeking innovative approaches to address emerging online harms and learn from global best practices. It has produced a discussion document titled ‘Safer Online Services and Media Platforms’ to explore content regulation. Additionally, NetSafe participates as an observer in global regulators’ forums, engaging in relevant discussions.

To ensure a comprehensive and collaborative approach to internet safety, different stakeholders need to come together for discussions. This collaboration within the digital ecosystem creates spaces and opportunities for diverse parts of the infrastructure to engage in meaningful conversations.

Industry-led interventions are considered crucial in promoting online safety. By providing a platform for different voices, these interventions contribute to a balanced and effective response to online threats.

However, the regulation of platforms has raised concerns about the withdrawal of news services and its effect on media plurality. Some platforms have indicated they will withdraw and stop providing news in response to regulations such as the Fair Digital News Bargaining Bill. This highlights the challenge of balancing regulation with maintaining a diverse media landscape.

The importance of media plurality and media literacy is widely acknowledged. Media plurality is seen as crucial for a vibrant civil society, while media literacy empowers individuals to critically navigate the digital world.

The existing media landscape is undergoing significant transformations under the influence of both old and new media. Brent Carey suggests that understanding these changing dynamics, and responding to them effectively, is necessary in this evolving landscape.

Preserving online privacy is of utmost importance, and New Zealand has implemented stringent measures to tackle privacy violations. The Harmful Digital Communications Act imposes penalties of a $50,000 fine or two years imprisonment for posting intimate images without consent. The New Zealand police actively prosecute such offenses, emphasizing the seriousness of this issue.

Brent Carey supports severe repercussions for online privacy violations and highlights the effectiveness of the Harmful Digital Communications Act in addressing such breaches.

Encouraging the online industry to uphold the highest standards of safety and corporate citizenship is essential. Brent Carey believes in striving for the highest standards, rather than settling for lower ones exhibited by certain platforms. Companies like Twitter have taken steps in this direction, as evidenced by Brent Carey’s involvement with Twitter’s Trust and Safety Council and their commitment to online safety through localized data.

It is worth noting that the discussion did not cover the Judaic argument, as Brent Carey explicitly declined to discuss it. This suggests that certain limitations or sensitivities may exist concerning specific topics within the broader discourse of online safety and governance.

In conclusion, New Zealand’s enactment of the Harmful Digital Communications Act reflects its commitment to online safety. NetSafe’s mediation services and the ‘Aotearoa Online Safety Code’ further enhance efforts to address online disputes and tackle issues such as hate speech and misinformation. New Zealand actively explores innovative strategies and seeks global best practices to combat emerging online harms. Collaboration among stakeholders is crucial for effective internet safety, and industry-led interventions play a vital role. However, challenges remain regarding platform regulation and media plurality. Preserving online privacy and promoting the highest standards of safety and corporate citizenship are key priorities.

Rishika Chandra

Fiji is at the forefront of recognising the significance of online safety and has taken concrete steps to ensure a secure digital environment for its citizens. In 2018, Fiji enacted the Online Safety Act, which laid the foundation for the establishment of the Online Safety Commission in 2019. The commission has made considerable progress in organising awareness and education programmes to educate people about potential risks and equip them with the necessary tools to protect themselves online.

Furthermore, Fiji has been actively involved in fostering international cooperation and knowledge sharing in tackling online abuse through its participation in the Global Online Safety Regulators Network. Formed in 2022, the network includes members from Fiji, the UK, Australia, Ireland, South Africa, and South Korea. This collaboration has been instrumental in promoting the exchange of ideas and experiences in combating online abuse on a global scale.

The partnership between the Online Safety Commission and eSafety Commissioner Australia, along with social media platforms such as Meta and TikTok, plays a crucial role in promoting online safety. Under this arrangement, the organisations work together to support online safety in Fiji and Australia by sharing best practices, raising awareness of online safety trends and emerging issues, and developing national online safety strategies. One of the primary ways they collaborate with these tech companies is through their content reporting systems, which enable users to report harmful and inappropriate content for swift action.

Governments around the world face the challenge of balancing regulations on online content and data privacy without infringing upon individuals’ rights to free speech or impeding innovation. While it is important to protect users from harmful content or cyber threats, it is equally essential to ensure that regulations do not stifle freedom of expression or impede the progress of technological advancements.

Fiji has taken a strong stance against online harassment, cyberbullying, image-based abuse, and child exploitation, criminalising these offences. The penalties are significant, including imprisonment and fines. However, it is worth noting that defamation is not covered under Fiji’s Online Safety Act.

To effectively regulate social media platforms, Fijians need a better understanding of their design, policies, and community guidelines. It is crucial for individuals to be aware of how these platforms work to navigate them safely. While social media platforms can be dangerous, they also serve as a means of connectivity and communication.

Building strong relationships and collaborations with social media platforms is vital in achieving a balance between regulation and individual rights. By working in a collaborative manner with these platforms, it becomes possible to address online safety concerns effectively.

In conclusion, Fiji’s commitment to online safety is commendable, with the enactment of the Online Safety Act and the establishment of the Online Safety Commission. The country’s active participation in international networks and partnerships, along with efforts to educate its citizens and collaborate with social media platforms, further solidifies Fiji’s position as a leader in this field. However, it is essential for governments to find a balance between regulation and individual rights, ensuring the protection of users while fostering innovation and free speech.

Audience

During the discussion, several key points were raised by different speakers. One audience member expressed concern about the involvement and engagement of civil society within the Internet Governance Forum (IGF). They questioned the extent to which civil society is included and heard in participatory discussions such as the IGF. This raised questions about the room and role for civil society and their ability to influence decisions.

Another speaker highlighted the importance of partnerships and their role in addressing the demands and concerns of civil society. They emphasized the need for the partnership to consider and respond to the voices and needs of civil society, particularly in the areas of peace, justice, strong institutions, and partnerships for the goals.

Doubts were also raised about the effectiveness of voluntary industry associations, specifically in sectors such as automotive, advertising, and digital identity. The audience member noted that voluntary industry associations in these sectors have failed to bring about significant change or address the concerns of stakeholders. This raised skepticism about the potential success of a new voluntary industry association.

The need to strike a balance between government and private sector involvement in regulating the internet was a key point of discussion. One speaker questioned the current system of industry-led regulation of the internet and advocated for a more balanced approach that includes government involvement. They highlighted the example of Canada’s Online News Act (Bill C-18), which required tech companies to pay news outlets for posting or linking content. This led to Meta removing news from its platforms, raising questions about the control that companies have over the digital space.

On the other hand, a speaker argued that less regulation can lead to better outcomes. They referenced the positive effects of the relatively unrestricted early internet and suggested that excessive government regulation can hinder innovation and progress. This viewpoint advocated for self-regulation as a solution, suggesting that businesses should take responsibility for their actions and address any potential harm caused.

Notably, there were contrasting viewpoints on self-regulation between different cultural contexts. A South Korean panel member advocated for self-regulation, while Europe has shifted towards government regulation. This highlighted the different perspectives on how best to regulate the internet and the need for cross-cultural understanding and collaboration.

The enforcement of online moderation rules and regulations was a point of concern, with many customers expressing dissatisfaction. The speaker called for transparency in the enforcement process but also highlighted the impact it may have on revealing business strategies. Striking a balance between transparency and maintaining customer trust was deemed essential.

In terms of partnership expansion, there was a call to bring more gaming companies into the fold and to establish rules and expectations specific to the gaming industry. This recognizes the unique challenges and dynamics within the gaming sector and the need for tailored approaches.

The challenges of information sharing within companies and content moderation were also discussed. Companies have been relatively low profile about information sharing within their functions, but there is a shift towards more sharing while considering trade-offs. Additionally, the stress and challenges faced by content moderators were highlighted through the game “Moderator Mayhem,” underscoring the need for a deeper understanding of the positions and support given to those responsible for content moderation.

The credibility of voluntary industry action in trust and safety was called into question, particularly considering the activities of certain companies in this space. There were concerns that their actions might undermine the overall credibility and effectiveness of voluntary action in ensuring trust and safety.

Finally, a speaker suggested that a non-prescriptive duty of care for user safety would be a better legislative approach. This would involve holding companies accountable for ensuring the safety of their users without prescribing specific actions or methods.

In conclusion, the discussion covered a wide range of topics related to civil society involvement, the effectiveness of voluntary industry associations, government and private sector involvement in regulating the internet, contrasting viewpoints on self-regulation, the enforcement of online moderation rules, challenges in the gaming industry, information sharing within companies, the credibility of voluntary industry action, and legislative approaches to user safety. Noteworthy observations include the importance of considering civil society demands and concerns, the need for balance and collaboration in regulation, and the challenges faced in content moderation and information sharing.

David Sullivan

The Digital Trust and Safety Partnership (DTSP), whose Executive Director is David Sullivan, was launched in February 2021 to establish best practices for trust and safety online using a risk-based approach. It aims to develop specific standards and practices for companies’ services, and emphasizes the importance of tailoring assessments and practices based on company size and risk. One of its key goals is to prevent internet fragmentation and support a free and open internet by developing international standards on trust and safety. DTSP believes that adopting a risk-based approach and conducting third-party assessments can help achieve these goals.

The partnership values the input of stakeholders, including industry perspectives, and aims to engage in broad consultations. DTSP recognizes the significance of independent third-party reviews to provide objective assessments of company practices. It also highlights the changing concept of self-regulation within companies as emerging regulatory regimes are established globally. David Sullivan advocates for greater transparency in online moderation processes and regulations, while also considering trade-offs. DTSP refrains from commenting on specific companies’ activities to maintain industry credibility.

The partnership acknowledges previous challenges faced by voluntary industry associations and emphasizes the need for proper implementation and alignment with emerging regulations. It also recognizes the spread and challenge of digital authoritarianism and emphasizes the need for collective action beyond individual company initiatives.

Overall, the DTSP aims to establish best practices for trust and safety online by tailoring assessments, considering various perspectives, advocating for international standards, and promoting transparency in online moderation processes. The partnership is committed to driving positive change in enhancing the trust and safety of the online environment.

Nobuhisa Nishigata

In Japan, except for broadcasting, there is no direct regulation of online content by the government. However, there are certain issues that persist, such as cyberbullying, online slandering, and the distribution of pirated content, particularly manga. Despite these challenges, the Japanese government places great importance on respecting freedom of speech and expression.

Measures have been taken to address these issues, including regulations against spam and efforts to balance public safety with human rights. The government acknowledges the need to protect children from online harm and encourages voluntary efforts such as the installation of filtering software, supported by mobile carriers. Additionally, the liability of internet service providers is limited to enable prompt action in response to harmful content.

There is a positive outlook for the future development of the Digital Trust & Safety Partnership (DTSP) and a recognition of the importance of combating pirated content without direct regulation. Japan believes in learning from successful practices of companies and sees co-regulation as an effective approach to tackle online content issues.

Concerns have been raised regarding public safety and the activities of tech companies. The frustrations of tech companies with government involvement are acknowledged. However, Japan remains committed to maintaining an open and free internet. The commitment of Japanese Prime Minister Kishida and Japan’s support for the Declaration for the Future of the Internet exemplify this dedication. Additionally, the importance of effective internet governance was emphasized at the G7 ministerial meeting in Takasaki.

Media literacy and caution about relying too heavily on online media and social networking sites (SNS) for information are highlighted. Concerns are expressed about companies lacking journalistic backgrounds and the variation in information depending on the country.

The handling of content-related matters, such as harassment and defamation, as criminal offenses varies depending on the case. Jurisdiction plays a role in determining the approach taken, and for more serious offenses, law enforcement may directly charge individuals. In other cases, private lawsuits can result in sanctions or mitigation.

Nobuhisa Nishigata expresses optimism about the further development of digitalization work. Nishigata supports private-led investment in digital infrastructure and believes the government should act primarily as a coordinator. Japan has already established a basic law concerning digitalization and the digital society, which emphasizes private-led investment in digital infrastructure.

Lastly, there is an expressed interest in a Japanese company joining global partnerships. The importance of partnerships and global cooperation, particularly in relation to the United Nations’ Sustainable Development Goal 17: Partnerships for the Goals, is emphasized.

In summary, while the Japanese government does not directly regulate online content, challenges and concerns persist regarding cyberbullying, online slandering, and pirated content. Respect for freedom of speech and expression is highly valued by the government. Measures such as regulations against spam, finding a balance between public safety and human rights, and involving tech companies in ensuring public safety are being discussed. The future development of the DTSP and the interest in joining global partnerships reflect Japan’s commitment to addressing these issues while maintaining an open and free internet.

Angela McKay

Angela McKay, a technology risk expert, strongly supports the concept of a free, open, and interoperable internet. She acknowledges the desire of global companies to operate in a global market and expresses encouragement toward conversations surrounding this vision. McKay recognizes that collaborative solutions are necessary to address the changing technology and harms landscape. Drawing from her experience in technology risk, she identifies similarities between the discussions around online harm and her field. She notes that governments, civil society, and companies have realized the importance of collaborating to tackle these issues effectively.

In terms of regulation and transparency, McKay believes that these approaches should reflect the cultural values and norms of a region. She acknowledges that regardless of the approach taken, governments represent the cultural values of their respective regions. This implies that regulatory and transparency approaches must be sensitive to cultural variations.

McKay advocates for a risk-based approach to address online harms. She highlights the need for companies to adopt risk-based approaches and emphasizes the importance of considering trade-offs to ensure a safe online environment. This approach allows for a more nuanced and flexible response to the complexities of online harms.

Cross-sector dialogue is another crucial aspect highlighted by McKay. She emphasizes the importance of conversations between different entities, citing examples such as the DTSP (Digital Trust and Safety Partnership) within organizations and the Global Online Safety Regulators Forum between regulators. Through dialogue and collaboration, learning can occur, leading to improved practices.

The exchange of best practices among companies of varying sizes is seen as instrumental in supporting global proliferation. McKay notes that the DTSP has partnered with the Global Network Initiative to involve civil society in advancing the Digital Services Act. This collaboration prevents knowledge and expertise from being confined to only large companies, ensuring that even medium and smaller companies have an opportunity to benefit from best practices.

McKay recognizes that the field of operational maturity is continuously evolving. Companies are constantly seeking out novel methods and practices that have not been previously implemented, highlighting their commitment to continuous learning and improvement.

The importance of exchanging ideas among different communities of civil society is stressed by McKay. It is not sufficient for companies alone to engage in dialogue; the participation of civil society is crucial to ensure a more inclusive and comprehensive approach to addressing online harms. McKay mentions that Google has been actively involving civil society members and academics in discussions on topics like child safety. They are also exploring the use of requests for information and online forums to catalyze conversations and gather diverse perspectives.

Advocating for active engagement with civil society, McKay suggests that companies should proactively encourage dialogue and collaboration among different communities. By bringing in external voices and perspectives, companies can better understand and address societal concerns.

While acknowledging the potential benefits of regulation and transparency, McKay cautions against viewing them as a panacea for all problems. She believes that focusing on the behaviors these measures aim to drive is more important than fixating on the enforcement method. This perspective challenges the false dilemma of regulation versus transparency, shifting the focus towards the fundamental goal of shaping positive online behaviors.

The progress made in managing cybersecurity risks is acknowledged by McKay. She highlights the evolution from solely focusing on vulnerability management to a more holistic, risk-based approach. This progress highlights the continuous efforts to enhance cybersecurity measures and protect online users.

In conclusion, Angela McKay’s perspectives highlight the importance of a free, open, and interoperable internet, collaboration to address online harms, culturally sensitive regulation and transparency approaches, risk-based management of online harms, cross-sector dialogue for learning and improvement, the exchange of best practices among companies of varying sizes, continuous learning and improvement in operational maturity, the significance of exchanging ideas with civil society, and the need to focus on driving desirable behaviors rather than fixating on enforcement methods. Her insights contribute to a more comprehensive understanding of the complexities and potential solutions within the digital landscape.

Kyoungmi Oh

The South Korean government is currently making efforts to exert control over content on various platforms, which has posed challenges and highlighted the need for increased transparency. Civil society organizations in South Korea are requesting platforms to disclose government requests for user information and content takedown, a practice that ceased around 2011.

The inadequacy of the SAFE framework in addressing the unique aspects of the digital industry has been noted. The framework fails to consider the importance of freedom of expression and privacy, and the potential harms that occur when content is taken down or censored. This calls for a more nuanced approach to trust and safety that prioritizes protecting freedom of expression.

Collaboration with digital rights organizations and civil society is crucial for effectively managing trust and safety in the digital industry. The Trust and Safety Council of Twitter serves as an example of successful collaboration, incorporating a wider range of perspectives and insights into content regulation decisions. Limited transparency with recognized human rights organizations under appropriate non-disclosure agreements is also seen as beneficial.

Incorporating industry-specific considerations and placing greater emphasis on enforcement and transparency within the SAFE framework is necessary. The current framework falls short in addressing the unique characteristics of the digital industry, with abstract questions that do not cater to its specifics. Clarity on what content should be taken down is lacking, leading to confusion and potential bias in decision-making.

Self-regulation is preferred over governmental regulation, as endorsed by South Korean civil society organizations. However, transparency in the self-regulation process is crucial due to the diverse interests, goals, and missions of different organizations.

South Korea has enacted legislation to address cybercrimes, particularly harassment and sexual abuse. This legislation provides for the punishment of offenders who commit such crimes over information and communications networks and establishes a legal framework for combating them.

In conclusion, the South Korean government’s control over platform content and the shortcomings of the SAFE framework have raised concerns regarding transparency, freedom of expression, and privacy. Collaboration with digital rights organizations and civil society, industry-specific considerations, and enforcement are essential for effective trust and safety management. While self-regulation is preferred, transparency in the self-regulation process is crucial. Legislation addressing cybercrimes demonstrates South Korea’s commitment to combating online abuse. Addressing these issues will contribute to a more inclusive and secure digital environment.

Session transcript

David Sullivan:
Hi, everyone. Welcome to this open forum on the Digital Trust and Safety Partnership. We are just sorting out one matter technically, and then we will get started. Okay. Am I able to control the slides? Yeah. Okay. Because I don’t know how I ‑‑ so good afternoon, everyone, and good morning, good afternoon, and good evening to anyone who is joining us remotely. I’m David Sullivan. I’m the Executive Director of the Digital Trust and Safety Partnership. We’re thrilled to be here at the IGF holding our open forum. What we’re going to do today is first I am going to tell you a little bit more about the Digital Trust and Safety Partnership, our objectives, the progress we’ve made, our approach to really articulating industry best practices for trust and safety online, and then we’ll talk about what that means for the free and open internet, and we have a terrific panel of guests and experts joining us from across the region, around the world, different stakeholder constituencies, and we will so have that panel discussion, and then we’ll really try to benefit from the expertise of everyone here in the room and save plenty of time for open discussion and Q&A. So with that, I’m happy to get started. I just ‑‑ yeah. Okay. Great. Yeah. Here’s our agenda. Just mentioned. And I’ll introduce our panelists in a moment when we get to that piece, but first let me tell you all a little bit more about the Digital Trust and Safety Partnership. So DTSP launched two and a half years ago in February 2021. We’re really a voluntary industry body that’s come together to articulate best practices for what we call trust and safety. This is a term that is well understood within the industry, within technology companies, where teams responsible for something called trust and safety have often been around for 15 or even 20 years, but it’s a term that is less well understood and maybe is often more thought of within the internet governance space as platform governance, platform regulation, content moderation, online safety, all of the issues around the content and conduct online that we are all concerned about. So our partners have come together to articulate best practices using a risk‑based approach, and I would say that there’s two fundamental aspects to how our partnership works that we want to highlight and start with. The first is that there are many, and I think even within the IGF, probably at other sessions that are going on right now, very important discussions about the sort of normative aspects of how should content be governed and regulated online. What does international human rights law say about this? What does national law and regulation say about this? Our partnership is taking a different point of departure. We are descriptive, and so we say there are practitioners inside tech companies who have been working on these issues. Let’s describe the work that they do. The second fundamental piece is that we are not suggesting that companies offering products and services that are very different from each other, from search engines to social media to instant messaging to dating to video games to sharing economy, the idea is that all of these companies should have their own policies, their own terms of service that are particular to their product, to their audience, but that these companies can use the same practices to address the content and conduct that they do not want to see on their platforms, so we are about aligning companies around practices, not around specific types of content. 
So, as I said, different risks, different threats for all different types, so there really is no one-size-fits-all approach to trust and safety, and as we know, this is a constantly evolving and changing world. The threats that people are worried about online tomorrow will be different than they are today, and so we need this risk-based approach that is going to evolve over time. Here is just a quick snapshot of our current partner companies, and as you can just see from that quick glance, we bring together companies of different sizes with different business models and very different products and services, but again, trying to align around those common approaches, the common framework, and so here is our best practices framework, which all of our companies commit to, so basically we have five overarching commitments that all of the companies who are partners of DTSP commit to, and they mirror the product development lifecycle and start with product development, so this is really about safety by design and sort of identifying and evaluating content and conduct-related risks in product development, so as I said, this is not particular to child safety, it’s not particular to disinformation, but it can really encompass any of the risks that a company might be concerned about. The second commitment is around governance and adopting the sort of being transparent and adopting explainable rules for their product or service, enforcing those rules in the third commitment around enforcement, improving over time, and then being transparent with the public about how all of these processes take place. So here you can see that underneath those five overarching commitments that I just mentioned, we’ve articulated around 35 specific best practices for trust and safety, and the idea is that companies can use whatever combination of these practices or perhaps identify other practices that are particular for their product or service that they can implement in order to sort of align with our framework. The goal is not really to say that, hey, here is all of the answers to dealing with trust and safety online, but to say here’s a framework, can you find within this what works for your company, for your product, your service. So having best practices is great, but it doesn’t mean anything unless there’s really a robust evaluation and assessment of how companies are using those practices. So the first thing that our organization did after it launched in February of 2021 was to develop a methodology for assessing how companies are implementing these practices, which we published in December of 2021 called the SAFE Framework, and in 2022, our founding companies undertook self-assessments of their own trust and safety practices using this approach. And two things that are fundamental to this approach to assessment, which I think can be relevant to a lot of the conversations that are going on now globally about what to do about these issues about online content. The first is that the assessments are tailored based on risk, so they’re about looking at the size and scale of a company so that we’re not asking a company like Bitly to do the same assessment we would expect of a Google or a Microsoft. And then also look at risk, look at the user volume for a product or service or look at the product features that might introduce levels of complexity or risk that would warrant taking a much more intensive and detailed look at that product or service. 
And then based on that, companies used this five-step assessment methodology to look at and find out what level of maturity are their products and services. And we actually developed this maturity rating scale, sort of five steps from ad hoc to optimized, and companies used our process to identify where they saw their own practices as less mature or more mature. And because our goal as an industry association is to develop best practices and show accountability, it’s not to be ranking our companies against one another. So in our public report about these practices, which is on our website, DTSPartnership.org, we aggregated and sort of anonymized the results of these self-assessments in order to show what’s the range of maturity for the different practices that our companies are using for online safety. And what we found was here’s where companies saw their practices as being more mature. And these processes, generally speaking, are things that teams within companies responsible for trust and safety or content policy can often do sort of by themselves. Teams that have been working on having policies and standards and enforcing those standards and reporting on those standards, that is where companies tended to find that they were more mature. Looking at where companies saw their practices as less mature and in need of improvement, these are the practices on this slide. And here we can see it’s oftentimes things that involve working with external organizations and externals. So getting input from users on how to shape content policies, working on community self-regulation for the types of services that have that kind of community moderation component, or working with researchers and academics on things like access to data and other programs like that. So this is a snapshot in time that’s now more than a year old, but I do think it sort of gives a sense of where the industry saw itself as doing better and as in need of improvement and something to build on. So where we are now is we’ve shared those results and we are starting to pilot how we can look at having independent assessments, where it’s not the companies assessing themselves, but having independent third-party assessments that can complement or work with or help provide companies with workable solutions for compliance with many of the online content regulation regimes that we’re seeing developing in different places around the world, some of which we’ll talk about in the panel discussion shortly. So just to kind of restate, the objectives for our partnership is really first about bringing companies together to protect people online, protect their safety, and protect their rights. See how these best practices can be supported by governments as they consider their own approaches to content regulation. Grow our membership so that it is reflective of the global world and all of the people who are using these services around the world, so looking for new members from other parts of the world. And look to lay the groundwork for international standards in this space. There’s a number of things that we’ve done recently that I wanted to mention briefly. We’ve just released over the summer an industry glossary of trust and safety terminology in order that our members can kind of align around the baseline definitions for the terms that trust and safety professionals use in their daily lives. Again, that’s on our website, and there’s copies I can share. We have a booth in the exhibition hall where you can also access that via a QR code. 
We’re also working in multi-stakeholder and public-private partnerships, including with the World Economic Forum’s Global Coalition on Digital Safety, to develop together with civil society and regulators and international organizations some common approaches to things like risk assessment. And we’ve just launched and published a set of guiding principles and best practices for age assurance, showing how our partnership can zero in and develop some practices on specific elements within this broader framework of trust and safety. So as I said, next steps is really looking at those, trying to pilot an approach to third-party assessment, continuing to consult broadly with stakeholders, and support our companies with their efforts towards having really meaningful and transparent compliance with regulations in a way that ultimately makes people safer online. So with that, I think we can move on to the panel discussion here, and I am thrilled to welcome our panelists. So here in the room, we have Nobu from the Ministry of Communications of Japan. We have Kimi from OpenNet Korea, a civil society organization from Korea. We have Angela, who’s the head of Trust and Safety Research and Partnerships at Google and a DTSP board member. And Farzana, do we have our remote speakers as well? Yeah. So online, we have Brent Carey from NetSafe, and we have Rashika from the Online Safety Commission of Fiji. And so really what we wanted to do with this open forum was to, you know, sort of leverage the expertise of the global internet governance community to talk about what does this industry kind of effort on trust and safety mean for a free and open internet, and can, you know, this approach of industry best practices and standards be leveraged to prevent things like internet fragmentation and support the goals that we ultimately want of an internet that facilitates and promotes people’s rights, and while also keeping them safe. So I’m going to start here with you, Nobu, thank you. The Japanese government has been generously hosting us here in Kyoto. It’s been wonderful. So you have really an extensive career in technology policy here in Japan, as well as working at the OECD. And I wanted to ask you, what role do you see in terms of industry practices when it comes to the development of Japan’s approach to regulating online content, and how do you see this potentially supporting or not the goal that I know Japan very much shares of an open and interoperable internet?

Nobuhisa Nishigata:
Hello, everybody. My name is Nobu Nishigata from the Japanese government, and thanks for the kind introduction. And, you know, you guys, everybody is welcome to come here. And then let me say, we also thank you, everybody, for your participation and contribution, which made this event great. So this is now, of course, we did a lot of the job and preparation to host you, but on the other hand, it is not all. Remember that everybody makes this happen. And having said that, and thanks for the question, and then just, you know, before answering directly to the question, let me make it clear that one point that the Japanese government is not engaged in the direct regulation of the online content. I mean, except for the broadcasting. We have some regulations, so do the other countries. We have some regulation on the broadcast content, but on the other hand, we don’t have the direct regulation on the online content yet. But on the other hand, having said that, again, then, of course, we do respect the freedom of speech, freedom of expression, et cetera. But on the other hand, secondly, though, I have to say that there are several outstanding content issues regarding the online delivery or, you know, circulation, et cetera. So to name a few, and then the CSAM, the spam, or maybe it’s outstanding in Japan, the cyber-bullying, or online slandering, or maybe I would say include the piracy, the content, the deliberation, I mean, delivery, the pirated contents in manga. I mean, regarding the manga piracy issue in Japan, it’s heavy.

But please find your time to visit the IGF village, and the exhibition downstairs to stop by the booth and present how we can combat without the direct regulation over the pirated contents. But we still fight, and there are some introductions out there, and you can get some souvenirs as well. That’s a little advertisement, because personally, I organized half part of the exhibitions. Then, like, for those issues, what we do is, like, we have introduced several measures for the mitigation. For example, like, we have the particular regulation to solve a particular problem, like, for spam. Like, we have some particular regulation against spam. And we have some particular regulation to protect the children from the online harms, particularly for the CSAM. And of course, we do have some voluntary works as well. Voluntary though, but to enable the filtering installation, software installation, some help with the telephone carriers in the smartphone, etc. And for the same purpose, to keep the children from the online harms. And maybe we have some other regulation. I mean, this could be some common practice within, particularly in, I would say, like-minded countries, but limit the liabilities of the internet service providers to enable their prompt action to avoid the online harms to the people. So, like, these are the, you know, particularly applies to the internet, but the technology is so fast, right? It’s a unique to the internet, I would say. We have to review and update almost every year and everything. So, better than losing my job, it’s okay to be busy, but still, the internet kept us very busy, and this is where the best practice role come in, you know. So, we face several dilemmas, particularly between the public safety versus human rights, right? And the public safety is our biggest concern as a whole government. Or maybe the individual personal safety, particularly who suffered from online harm versus the other people’s human rights, which is going to be where to be the balance, you know. So, you know, it’s a bunch of dilemmas that we face when we have to think about these things. But, you know, that this, actually, personally, though, I’m having the great expectation that the further development of this work at the DTSP, because the advanced information, the best practice, at least, I mean, from the company side, this is going to help us a lot, you know. I mean, we do usually, like, for example, maybe, like, the current European legislation, new legislation, like a Digital Service Act, DSA, that’s going to be the one reference point when we think about the new regulation, I mean, compared to what we have. I mean, it’s a different style, but maybe, like, what we can do as a government is very similar. But maybe one difference is, like, we don’t have, or maybe we have not reached to the level of the co-regulation in this area. Like, a company makes a commitment to the government, right? Then, like, the government’s going to evaluate the commitment later. And we have that kind of system toward the regulation on the online platform, but it’s about the competition side. Like, I mean, very similar to the mechanism to the DMA. And actually, Japan predated the EU, so maybe EU’s got a better system, I would say, but we still have some co-regulation. It’s a first example, a good first example of the co-regulation in Japanese society or Japanese market, I would say.
So the co-regulation, I mean, even though we don’t really have to do the co-regulation, because if we, the information from the company, what they are doing as a good practice is at least, and then it’s going to be available, then we can learn from them. Then maybe we can talk to some problem-having companies that, hey, come on, then take the report, you know? I mean, you don’t really have to, but you have to look at it, and then you can think about fixing a little bit about your conduct, you know? I mean, otherwise, you know, if it gets mandatory, then it’s going to be huge, heavy work for the government and maybe some bad reputation to the company as well. So it’s kind of lose and lose situation in which we don’t want to have in many cases. So that kind of expectation I already have, and then just, you know, I just talked about some of the example of the online content issue in Japan, but, you know, toward the open and interoperable internet, it is not only the content issue, but, you know, there are many, many issues with the internet. I mean, of course, as a government person, I understand that some frustration comes in from the tech company, you know, that, come on, don’t get into the market, you know, government should stay away from it. We understand to some extent, but on the other hand,

Nobuhisa Nishigata:
you know, as I said, public safety concerns us, right? So there could be some, well, I mean, it’s going to be easier if we could draw the line, but the line is not a straight single one, so, you know, we have to keep talking, talking those kind of things. So from that perspective, maybe, you know, this best practice as well, that helps. And let me finish by one more introduction, and then I’m not sure if you are aware, but our Prime Minister Kishida came to this event in day one, and he made some speeches, and, you know, the highlight maybe for this open forum would be that he committed, and of course we have to follow it once he commits, right? He committed it to the open and free internet to maintain. I mean, there are background, some evidences, like Japan is one of the first countries to express support to the future declaration on the internet, or like as a G7 chair this year, like we led the discussion about, it’s kind of a rare case that the G7 ministers get together. I mean, we have the G7 ministerial meeting every year, but on the other hand, we don’t talk about internet governance very much, but on the other hand, if you can, you have the time to look at the ministerial declaration from Takasaki, it’s on April 30 this year, but we very much having the ticks on internet governance, and, you know, the G7 get together, join force to support this IGF event, or like, you know, making some collaboration effort to task force to cooperate, like, you know, UN GDP initiative, those kind of things to maintain the open free internet. So, you know, these things, I mean, we have to maintain our good environment, right? I mean, the government has to, the thing that the government has to do it, we do it, but, you know, the government cannot solve many problems only by ourselves, and we need your help to push these things forward. So, thank you very much. Maybe I should stop here.

David Sullivan:
Thank you very much, Nobu. I think that was great. I’m already starting to hear some themes that I think other speakers will come back to, in terms of the value of the conversation between companies and regulators and other stakeholders, the importance of the leadership of states like Japan, especially taking this to the G7, raising internet governance there. But, of course, there are many states around the world, and this segues nicely to turn to our first online speaker, Rishika Chandra, from Fiji’s Online Safety Commission. So, I think what we know is that while many states, and Fiji I commend for taking the lead as a small island state working on safety of people online, but not all states have the same level of resources and the same heft that either the European Union that you mentioned with the Digital Services Act or Japan might have. But nonetheless, these states are working together and thinking about ways to pool resources and work together to coordinate in terms of online safety, and there I was particularly hoping that Rishika can tell us about the work that Fiji is doing as well through the Global Online Safety Regulators Network in this space. So, Rishika, over to you.

Rishika Chandra:
Thank you. Hello, everyone. Greetings to you all. My name is Rishika, and I’m the Project Officer for the Online Safety Commission Fiji. I would like to take this opportunity to thank the management for making Fiji part of the 18th IGF. Thank you very much. Before diving into the Global Online Safety Regulators Network, I would like to first give a bit of background about online safety as to what we do. So, Fiji is one of the first countries to recognize the importance of online safety and take concrete steps towards ensuring a secure digital environment for its citizens. In 2018, Fiji enacted the Online Safety Act, which paved the way for the establishment of the Online Safety Commission in 2019. Since its inception, the OSC has been dedicated to promoting online safety through various initiatives. One of the primary objectives of the OSC is to raise awareness about online safety among individuals and communities. To achieve this, the commission organizes awareness and education programs that aim to educate people about potential risks and provide them with tools to protect themselves online. So, we partner locally. For example, we have signed a memorandum of understanding with the Fiji Police Force in 2020. They help us to enforce and prosecute matters that breach the Online Safety Act. Additionally, we work with other relevant ministries locally, NGOs, as well as non-governmental agencies to promote online safety and digital literacy. Going on to the international engagements, firstly, I would like to highlight our partnership with eSafety Commissioner Australia. The partnership had been engaged, had been exchanged in 2021. Under this arrangement, the organization works together to support online safety in Fiji and Australia through sharing best practice, raising awareness of online safety trends and emerging issues, develop national online safety strategies, strengthening online safety response capabilities, and working together to achieve mutually beneficial online safety outcomes. Moving on to social media platforms, we partner with Meta and TikTok because in Fiji, there’s a lot of users of Instagram, Facebook, vastly Instagram, Facebook, and TikTok. So, we took this initiative to extend our partnership with Meta and TikTok. We work with them closely. One of the primary ways in which we collaborate with these tech companies is through their content reporting systems. These systems allow users to report any content that they find offensive or harmful. The Commission has been actively using these reporting mechanisms to moderate and take down contents that cause or intends to cause harm to an individual. Furthermore, the Global Online Safety Regulators Network, it was formed back in 2022. It was recently formed. So, the Online Safety Commission, eSafety Australia, and Ofcom are the movers of the network. The purpose of the network is to bring together independent online safety regulators to cooperate across jurisdictions, sharing relevant information, best practice, experience, and expertise, and support harmonized or coordinated approaches to online safety issues. Since its formation, the network has immensely expanded with members from Fiji, UK, Australia, Ireland, Africa, and Korea.
The discussion and debates from these networks have played a pivotal role in helping the Commission gain valuable insights into debates into how different countries tackle online abuse and incorporate these safety policies into their laws and how the knowledge sharing platform has provided an opportunity for countries to learn from each other’s experiences, successes, and challenges. Fiji, being a small Pacific country, is making significant strides towards embracing the tech world. However, it can benefit immensely from observing and adopting best practices employed by other nations. By doing so, Fiji can ensure that its citizens are protected against online abuse while fostering a safe digital environment. So, that was about the Global Online Safety Regulators Network. We have recently actually welcomed Africa and Korea to our network. So, basically, we discuss topics around age verification, age assurance. These are some hot topics that we are currently on. Human rights, paper, freedom of speech. So, these are the regulations we usually talk about.

Rishika Chandra:
There’s like working-level meetings and a senior-level meeting. I represent the working level and Mr. Jishwari Devi, who’s the Acting Commissioner for the Online Safety Commission, represents the senior level. Thank you.

David Sullivan:
Thank you so much, Rishika. Yeah, so I think it’s really interesting to see how regulators are also sort of thinking about how to come together and where we can find points of interoperability, I think, between what’s happening in the governmental space and what’s happening in the industry space where our partnership works. So, we’re going to ping-pong across the Pacific a little bit and we’ll now turn to Kimmy from OpenNet Korea, a civil society organization that’s really been leading as a watchdog for freedom of expression in Korea. And so, the mention of Korea having joined the regulators network is timely, but I think we wanted to talk about what regulation looks like in Korea and whether you see some opportunities or challenges for the sort of practice-based approach of systems and processes to online safety that we’re promoting when it comes to protecting human rights, particularly in the Korean context.

Kyoungmi Oh:
First, thank you for having me and I’m glad to hear that, you know, Kishida declared OpenNet’s commitment because our president did, either. Yeah, so, but the current situation related to platform companies in South Korea is not good. Actually, it is very bad. The Yoon government has been trying to control platforms and censor the user-made content under the name of making a healthy society and fostering the internet ecosystem. In these circumstances, civil society organizations in South Korea request each platform to disclose the number of government requests for user information and take down contents. I mean, transparency. South Korea has several interesting experiences with transparency. Until 2011, the two big platforms, you know, Naver and Kakao, disclosed a number of government requests for communications data. They published the result on their transparency report. This made a huge impact on public opinion and legitimized the platform refusing government requests. I’m focusing on transparency and enforcement when I review this report. Actually, DTSP is an honorable multi-phase initiative by digital companies to enhance the trust and safety of their products. But I also know how hard it is to assess different platforms using standardized indexes because each platform is different from each other. Size, earnings, business models, target consumers, and so on. If we can somehow apply all these factors, the result might be different. Here are my comments for improvement as a civil society organization’s researcher. So, it could not be comprehensive. First, I have five comments. First, trust and safety does not take into account the human rights harms when contents are taken down or otherwise censored. Trust and safety is defined in terms of content and conduct-related risk, which is in turn defined as illegal, dangerous, or otherwise harmful content or behavior. The way it is defined, only the contents, not their takedowns, are deemed as causing risks. This does not sufficiently protect one important human right, freedom of expression. If one can be harmed directly by another’s contents, that is because it causes mental distress on the subject or audience. However, censorship can also be dangerous if dissenting voices are removed. For instance, in a society charged with religious hatred, majority leaders or the government’s disinformation can trigger violence on the minority, and censoring minorities’ leaders will further weaken them. This is especially important because digital authoritarianism is on the rise. The governments are becoming more and more the source of harmful disinformation and harmful censorship. Second, DTSP’s SAFE framework is so well thought out that it seems adaptable to any country, any industry, not just the digital industry. I can clearly see the same iteration of development, governance, enforcement, improvement, and transparency being very important to the trust and safety of the pharmaceutical industry, for instance. But I am then worried whether the framework sufficiently focuses on the unique aspect of the digital industry, such as freedom of expression or privacy. These three industries have formed a liberating and equalizing core of human civilization. Search engines and platforms have provided powerless individuals with the same power of information and mass communication formerly available only to big companies, government, or legacy media. Can we define trust and safety in ways that protect the unique civilization significance of the internet, or will DTSP become one of the numerous consumer product safety initiatives?
I think that the success of DTSP lies in whether we can answer these questions correctly. Third, fortunately, one way to strengthen the connection to the unique significance of the digital industry is already reflected in some of the 35 best practices. That is collaboration with digital rights organizations. No technology has been welcomed as much as the internet. It was met by a new wave of numerous organizations dedicated to the protection of the internet. These organizations and companies can have common goals, if only the companies allow it by deviating a little from their private motives. I'm sorry. DTSP asks companies to work with organizations in the process of product development, product enforcement, and product improvement. However, you need the same element in transparency as well, I think. Actually, without transparency, communication during product development, enforcement, and improvement may not be meaningful. Yeah, limited transparency with recognized human rights organizations under appropriate non-disclosure agreements can be very helpful in adding context and nuances to content moderation while not risking abuse by bad actors. Twitter's (yeah, I'm so sorry, it's now become X) Trust and Safety Council did this relatively well, sharing much more information about new products, enforcement, et cetera, with civil society. This would answer another question DTSP posed about the difficulty of maintaining transparency while not revealing information that can be used by bad actors for abusive purposes. Limited transparency with civil society groups must be explored more. Fourth, the SAFE framework, with its 35 best practices and 45 questions, is too abstract and procedural. Instead of defining what content should be taken down, the SAFE framework asks the following questions: How are content reviews prioritized, and what factors are taken into consideration? What types of tools or systems are used to review content or to manage the review process? What types of processes or mechanisms are in place to proactively delete potentially violating content or conduct? Asking these questions is not a problem in itself, but it is hard to evaluate the SAFE framework based on these open-ended questions, because we don't know how content reviews might be prioritized and what tools or systems might be in place. I think the questions should be phrased in a yes-or-no format and should reflect the industry's unique aspects. Yeah, just one more comment. Fifth, DTSP asks the following question: whether some commitments or best practices should be given greater consideration than others when conducting assessments. I think that product enforcement and product transparency are the more important, because that is where the rubber meets the road. That is where the products are in direct touch with the users. What is lacking in development, improvement, and governance can be compensated for by rigorous enforcement and transparency. I should close here. Thank you for listening.

David Sullivan:
Thank you, Kimmy. That is incredibly valuable feedback on some of the really detailed aspects of our framework, and also a helpful reminder of the wider human rights context and its importance. Particularly, I think, in the world of trust and safety there's often a sense that we need to take more things down, and we need to think about the consequences of those takedowns as well for the rights of all. So with that, I'm gonna turn to our other remote panelist, Brent Carey from NetSafe New Zealand. Brent, and wanna make sure we save plenty of time for Q&A, so I'll ask folks to be brief. But Brent, it would be great to hear from you how NetSafe New Zealand is working in the local context and how you are sort of bridging this kind of local context to global company tension that others have already spoken to. Yeah, thank you very much.

Brent Carey:
And thank you to Japan again for being the host nation. I was last in Kobe for the ICANN meeting, and I wish I could be with you in person. Obviously, New Zealand has a commitment to online safety. And in New Zealand, we passed the Harmful Digital Communications Act back in 2016. And importantly, that act actually has a number of principles, because some of what people have talked about is how fast-paced technology is. And so in the Harmful Digital Communications Act in New Zealand, we have a principles-based approach to dealing with online harm. And those principles cover topics like incitement to commit suicide, breach of confidentiality, harassment, and all sorts of other online issues. And in New Zealand, that scheme has both a criminal and a civil side. And importantly, NetSafe has been approved by the government as an NGO agency to deal with the civil side of tackling online harm. And more than 25,000 New Zealanders call on NetSafe every year to assist them to mediate and resolve disputes between victims, perpetrators, and also platforms. And so more than 7,000 people each year go through our mediation process. And that's something that is quite unique globally, to have an ADR scheme which is looking to resolve issues between perpetrators and victims and platforms. And importantly too, that's our local approach. And in our Act, it is a requirement for us to work globally and work with platforms to try and address issues. And we're really heartened too by this initiative to look at a risk-based approach, because a lot of what we're doing is to look at novel approaches for tackling emerging harms. And I just wanna give one example of that novel approach because I wanna really get into the conversation. NetSafe took the lead to convene the platforms two years ago to think about a voluntary approach to looking at some of the more emerging and edge-case harms. Those harms are like hate speech, disinformation, and misinformation. And for more than two years, NetSafe convened a forum, consulted with different stakeholder groups, and eventually in July 2022, we launched the Aotearoa Online Safety Code, which is a voluntary code with five platform signatories: TikTok, Meta, Amazon, Twitch, and Twitter, or X, as we know it now. And those platforms have agreed to look at risk-based approaches to what they're doing in relation to those emerging areas. And importantly for New Zealand, if you go to thecode.org.nz, for the first time we've had localized data that has been provided as a result of this voluntary initiative. And just in closing, this is an emerging landscape, the online safety regime, and we've talked about the European approach. It could also be a US approach or a geopolitical approach. And here in New Zealand, we're not immune to that. And we have a discussion paper called the Safer Online Services and Media Platforms Bill. I think we're sitting back to look at how the world is thinking about regulating this space. And that's also in play to look at what content regulation should look like. Importantly, that discussion paper says that what is already illegal and what is already harmful or objectionable won't be looked at; it'll be looking at other regulatory gaps. And importantly too, in New Zealand, we want to participate in forums like this. And NetSafe, too, is an observer of the regulators forum, along with members that are on this panel, Fiji and Korea. We're pleased to be able to join that forum again this year too.
And so we’re trying to learn from best practice as well, and also share our knowledge with the world. So thank you for the opportunity to just give that brief introduction.

David Sullivan:
Thank you, Brent. Really grateful to have your contributions, and sorry that you and Rishika are not able to be with us in person. So I now want to turn to Angela from Google, who wears both a DTSP hat and a Google hat, to perhaps tell us what may have resonated or not with some of the comments we've already heard from the other speakers, and also to help set the stage for a conversation with the other participants here in the room. Happy to.

Angela McKay:
And I'd like to, like my colleagues here, thank Japan for hosting us here in Kyoto and for hosting the IGF. I'm really encouraged when I hear the conversation about a free, open and interoperable internet. That is very much what I would say not just Google and the DTSP members are looking for, but many of the companies around the globe, whether it is Naver here or other companies; we want to operate as much as possible in a global market. And so maybe just briefly about me, I've spent about 25 years in technology risk, and I was in the world of cybersecurity for about 15 or 20 years before coming over to trust and safety. And what I want to reflect on here is that many of the challenges and many of the solutions I heard highlighted by the panelists here are similar to what was going on in the cybersecurity conversation 15 years ago. At that point in time, and where we are now, governments, civil society and companies have all realized that we need to work together to address online harms. And what I really see is governments working to figure out how to do that, right? Oftentimes they may have existing harm-based frameworks in specific areas. Yet at the same time, they're realizing that technology is moving so quickly that, if you have a harm-specific focus, you might not always have covered the new harms that are coming up and changing over time. And so how to deal with that changing of the technology landscape and the changing of the actual harms landscape is very similar. One of the other things I'll note is that regardless of the approach, governments are going to reflect the cultural values and norms that are in their environment. So from a company point of view, we can recognize that there are going to be regulatory-based approaches. There are going to be transparency-based approaches. But I think what we are really looking for, and one of the reasons I was so happy to join DTSP and the colleagues here, is really these risk-based approaches that think about how we approach this environment where there are trade-offs, right? All of the representatives from governments up here are trying to ensure a safe online environment. Yet at the same time, I think as Kimmy really noted, there are trade-offs that happen. And so one of the things I just want to highlight, and then I'll open it up to discussion, is really the importance of these kinds of conversations. Inside of DTSP, companies need to talk to each other, because we are actually learning and improving practice by doing that. We're learning from each other. I'm so encouraged when I hear about the Global Online Safety Regulators Forum, because y'all need to do that too, right? And then we also need to have that conversation with civil society. So I think it's really important. It's not just a multi-stakeholder conversation, but also bilaterals between different types of entities, such that we can collaborate and really draw forward practice overall. I think I will just pause there so that we can actually open up to conversation, but those are a few comments just reflecting some of what I heard across the panel.

David Sullivan:
Thanks, Angela. So I think with that, we have the less-than-optimal situation of the mic in the center of the room, but we really wanted to take this point to open it up for questions and discussion and really have as much of a roundtable conversation as we can have in a room that lacks a roundtable. So questions. Nick.

Audience:
Hey, thank you for the overview. And look, I’m super happy that you all have this great space to talk to each other. But I guess at IGF, I have to ask the question, what room is there for civil society? And I was wondering whether I could invite you all to reflect a little bit on these five points about particularly where you might go in the future with the partnership, how you might respond to some of these concerns and some of the demands, I guess, for civil society to be more engaged and to learn more, just like you all are learning more. Absolutely. So I’ll go first on that one, and then welcome if Angela wants to come in.

David Sullivan:
We've also, you know, we've heard from civil society. Yeah, I think that Kimmy has made some really great points from a civil society perspective, and we welcome comments from others as well. I think part of the genesis of the partnership was the recognition that, as a first step, we need companies to talk to each other and sort of come together and think. And so there's been a certain amount of that preliminary phase, I would say, and also recognizing that there is value in saying that multi-stakeholder conversations are essential, whether it's here at the IGF or in other fora, but there is also value in sort of constituency-specific initiatives. So we have been deliberately not multi-stakeholder while seeking to consult widely with stakeholders. And so many of the points that Kimmy made were from a public consultation we did when we released the SAFE framework. We did a public consultation around the trust and safety glossary that we issued earlier this year. And we've actually authored an article for the Journal of Online Trust and Safety at Stanford about why and how we wrote that glossary. Farzana, myself, and Alex Feerst wrote it, and in it we responded to, I think, some of the points raised by folks who contributed to that consultation, from academics in Argentina at CELE or at CIPPIC in Canada, as well as folks from Ofcom and eSafety who contributed their thoughts. So I would point to that as kind of like how we're thinking about the process. I think, ultimately, for us, it's important that we keep this industry perspective, but have that be a contribution and part of the discussion with all the other stakeholders. And that's why we're here, and it's why we're at the World Economic Forum, working with that coalition and in other places. But it might be useful to hear, I don't want to put anybody on the spot, but also to hear from others. Maybe a couple of things I'll add just to David's perspective. Because, again, we do think it is really, really important to make sure that the companies are exchanging best practices.

Angela McKay:
And I think this is something that I've seen as kind of raising the tide for all the boats. And in particular, you have some big companies in the group, but you also have medium and smaller companies. And I'm hearing more and more about the importance of these kinds of practices for global proliferation. But I think DTSP, and then I'll speak to a company perspective, did actually have one conversation in advance of the Digital Services Act being finalized, partnered with the Global Network Initiative, to bring in civil society to kind of gain some perspective. But let me just be clear. This is a maturing area for these companies. If you looked at the maturity model that I feel like David actually had in mind, it is a maturing area. And so we're working to figure this out as well. I also think it's interesting that there is this kind of regulatory role. And I feel like some civil society is like, hold on, how are we supposed to be fulfilling this role? So when I think about it, from a Google perspective, we have been bringing in folks to, for example, our location in Dublin on specific topics, for example child safety, and partnering with existing institutions that have great reach into the civil society community. But we're also thinking about other methods that haven't been done before. Because it's not like everybody can go to one particular location, afford to fly there, and then have a conversation. We're thinking a lot more about how to use things like requests for information and online forums to really catalyze this conversation. And one other thing that I'll just say is we're also trying to do a little bit of exchanging ideas among different communities of civil society. Because, so, I manage a priority flagger program on a global basis. And one of the things I've noted is we have a lot of different folks in civil society and academia who are in these programs, but they haven't yet talked with each other. And so one of the things I'm trying to do is go, hey, there are actually practices that you could exchange among these different harm areas that would be really useful to helping mitigate these harms. And so we're really also thinking not just about how to get the insights to the companies, but also how to catalyze that community to talk with each other. And I would welcome any of your thoughts after the event as well.

David Sullivan:
And let's just make sure, if Brent or Rishika want to come in, that we keep that open, and likewise if there are any questions or online participants who want to make any comments. But other questions or comments from folks in the room? And it would be great if you could just introduce yourself.

Audience:
I was about to; not my first rodeo. I'm Sharon Polsky from the Privacy and Access Council of Canada. You've been at it for 25-plus years; I've got you beat by a decade plus. So I've seen a lot of things in government, in private industry, across Canada, and lots beyond as well. You say this is an industry association, a voluntary industry association. I've seen the voluntary industry associations in automotive, in advertising, in digital identity, and in a range of others where participation like yours is voluntary. Frameworks are published with great PR and fanfare, lots of money, lots of people, lots of presence. And it sounds great. And one after the other, they fall by the wayside. Companies say it's a really good idea and are gung-ho on it, but then: excuse us, we're not going to participate. It's a lot of talk and no action. Why should anybody have any trust that this is going to be any different? It's a very good question. What I would say is that the proof is in the pudding.

David Sullivan:
We are adding members to our organization. Just this year, we've added TikTok and Twitch. So we're talking about the key players, and more to come. And ultimately, all of this is only as good as it is implemented. But I think what is critical is that this is a space that is no longer just a place where companies are doing their own thing and self-regulating just amongst themselves. We now have this emerging regulatory regime. And it's incumbent upon all of us to think about how we, within that context, try to make it meaningful. So I think that's part of what we're doing right now, is thinking about how this set of industry practices relates to the requirements that companies have under emerging regulatory regimes in Australia, in Singapore, in the European Union, in the UK, and in other places. And how do we make sure that those regimes actually serve their intended purposes and actually keep people safer online, while also protecting and respecting people's rights? I don't know if any of the other panelists would want to come in on that or add other thoughts.

Kyoungmi Oh:
Yeah, actually, you criticized self-regulation. But in South Korea, as a civil society researcher and organization activist, I actually often get criticized, because we usually argue for self-regulation, not governmental regulation. But we need this self-regulation. Civil regulation is better than governmental regulation, because governmental regulation has a lot of side effects. So yeah, that's why we are criticized by South Korean society. And I agree with you that the way we are talking and gathering, with civil society organizations, activists, companies, and a lot of other people coming together, is not changing a lot. But I think that is fundamental; it cannot really be changed. But I think transparency is also important on this point. Because civil society organizations are not all the same. They have different interests, different goals, and different missions. And so I think the more civil society organizations you can gather and talk with, the better. And if you do the research, or a third-party assessment, and publish the research transparently, showing how many civil society organizations answered and how many civil society organizations

David Sullivan:
participated, that kind of thing? Yeah. And I think Brent wanted to come in, as well, on this. Yeah, thank you. I think it's a great question. I also think we're all part of an ecosystem. And so I just think that's really important.

Brent Carey:
Because the whole ecosystem has checks and balances and holds people to account as well. And I think in the safety space, the question is at which part of the infrastructure, or of the internet, we are asking for the interventions to happen. And that differs: is it content-based? Is it technical? And so I think all of that has a role to play. And if the system is working well, it's creating the spaces and places for those different parts of the infrastructure to come together to have those conversations. And so I do think it's important, if it's industry-led, that there are those spaces and places for those different voices.

David Sullivan:
Thanks, Brent. Other questions? I do have a question, plus some comments, as well, as a consumer of the internet.

Audience:
My name is Jenna Fung. I am a casual policy observer based in Toronto, Canada. I'm originally from Hong Kong. So earlier, we touched a little bit on regulations. Because of my background, I have mixed feelings about having government-driven regulation and legislation as well. But I also question having industry lead and regulate all this privately-owned public space. At the end of the day, it's a business; there are things that interest a company or organization. Is there a way to make sure consumer rights are involved in the process, so they are reflected in all this policy? Just to name one example, recently in Canada, with Bill C-18, the Online News Act, they require tech companies like Google and Meta to pay news outlets for posting or linking to their content. And Meta responded in August. They claim that, to comply with the law, they are removing news from their social media platforms, including Facebook and Instagram. I moved countries to buy myself some freedom to see news from whatever perspective; now I can't even use social media to see both domestic and international news. So I want to bring this up and see how everyone feels about that, especially when we are in a digital space that is predominantly led and governed or regulated by big tech. So I just want to throw that out and see how everyone is thinking about this.

David Sullivan:
Thank you. So does anybody want to respond to that in particular? I will say, I think, one, thank you for the question and the comment. It's an important issue. I think, luckily for us, there are numerous representatives from both Meta and the Canadian government at the IGF, and there may be other sessions where there are folks who are better positioned to respond to that particular issue, which is a contentious one. But I do think it is important to bring up that governments of different types have been using regulation, whether to accomplish political repression or with unintended consequences for people's ability to exercise their rights; the digital authoritarianism that was mentioned is real and spreading and a challenge that we all have to grapple with. At the same time, there is also a recognition that companies doing things on their own is not sufficient. And I do think that that's one of the reasons why we're looking at both how we have an independent third-party review of what companies are doing, so that companies are not checking their own homework, while also figuring out how not only our industry efforts, but I think also the perspectives of civil society organizations and other experts and users, can inform the development of the kinds of international standards that we need in order to support a more mature ecosystem that is both protective of consumers' and users' rights, while also respecting freedoms as well.

Nobuhisa Nishigata:
I shouldn't be long, and the time is coming, right? But I mean, I noticed the issue about Canada, and maybe we should be sending some condolences to those who suffered from the volcano, those kinds of things. But just one comment. There is a Japanese proverb, an old saying, that the most expensive thing is what is provided for free. It has many, many meanings. And particularly, I have some MBA background, so when a company's conduct is linked to the stock market, or to being short-sighted or long-sighted, those kinds of things, many, many things come together. To me, I can explain most of these behaviors with management and business school kinds of theories. It's a nuance; it's a traditional strategy type of thing, like if you've got short-sighted pressure from the stock market, the stock price, those kinds of things. And I mean, I'm not going to go too deep, but maybe that has a history, right? Or, like, you are paying something somewhere; otherwise, you are getting something for free, right? So then you shouldn't expect too much when somebody is giving you something for free. So it's more like a literacy thing. Then it's our common question to solve, that we have started rapidly depending too much on SNS and these kinds of online media to get our information. But you have to be careful, because they may not have the journalism background for that. We are not sure. It depends on the country how much you can trust the journalism in your country. But still, there is a difference. So I'll stop here.

Angela McKay:
I was just going to say very briefly, thank you for highlighting this. And while I don't know the specifics, I've heard about it broadly, I think you're highlighting again the challenge of trade-offs, right? And that oftentimes there are unintended consequences. I think the one thing that I want to challenge is the idea that either regulation or transparency is the right answer. I actually think that when we think about the risk-based approach, and making sure that you have the right set of stakeholders involved in the conversation, that is the kind of approach that ends up being effective. It can be done with regulation. It can be done through transparency. But ultimately, getting to that risk-based conversation with the right set of stakeholders is, I think, what really does help drive a difference. And I've seen it, just to speak to those 25 years of history. While a long time ago we were talking just about vulnerability management, that has improved, right? A long time ago we were talking about what a risk-based approach to cybersecurity is, and that has also changed over time. And so one of the things that's helpful with being a greybeard, if you will, even though I don't have a beard, is really having that perspective of change over time. It doesn't happen fast, and I think that's really frustrating. And I'll also just say, guys, we all still see the risks. And so there's a reality that even as we are managing risk, risk is evolving and changing. And so I think it can look like nothing has happened. But change has happened.

David Sullivan:
It’s just we’re managing new and changing risk over time. Brent, I think you wanted to come in as well. Yeah, I was just going to empathize with the questioner. Thank you for it.

Brent Carey:
Because here in New Zealand, we're having the same experience as Canada. We have the Fair Digital Media Bill, which is about bringing bargaining power to citizens. And we are watching the platforms' response in Canada and thinking, what does that mean for New Zealand? And it is interesting, because previously it was a voluntary landscape where lots of the platforms had negotiated with local media to actually have an exchange of money in order to support the local media industry. And we're still doing that. And we're still working out, well, what was the gap there? Because it seemed to be working well. And then the regulation has now stepped in, and it still is just a bill. And yes, these platforms are then saying we're going to withdraw and we're going to stop the news. And so we're concerned about that from a civil society perspective. But we're also thinking, well, it's really important to have media plurality, and that's what we're struggling with. We need to have more media sources and have more people who are media literate. So it's a really great question. And, you know, a lot of countries' media landscapes are changing because of old media and new media, and I think that's what we're all grappling with. So, you know, I'm watching the Canadian experience myself, personally, with interest.

David Sullivan:
Thanks, Brent. So I want to come back to the questions in the room. Thank you for your patience. Thank you. My name is Zahid Jamil.

Audience:
I'm an attorney. Very quickly: if I look back and say, what if in 1996 or thereabouts somebody had decided to say, we should have an internet act, my goodness, this is a terrible thing, it could be used for cybercrime and God knows what else, let's just put on the brakes and stop it, because regulation is very good. Thank God we didn't do that. And the lighter regulation led to very good things; you know, I don't even have to explain what goodness came out of it. But now, I find it interesting that the example that was just given is about how businesses had to respond to a government regulation, which led to the businesses doing what they wanted to do, right? The criticism is that the business did something, but let's not forget it was because of a government mandate. It was because of regulation; possibly, for their liability to be shielded, they did what they did. And so the harm that regulation is causing is something we should also discuss. And I find it ironic that we went around the world from the West and said, you know, Asia and everybody else should understand that we should have self-regulation, it's really good for you. And today on this panel, we have someone from Asia, from South Korea, saying self-regulation is good for you, whereas we're seeing something else come from Europe. It's just an interesting dichotomy. I just wanted to sort of underscore it. Thank you for the time.

David Sullivan:
Sorry. I think Rishika also wanted to come in, so we'll go back to her.

Rishika Chandra:
Just my comment on the previous question: we've noticed that governments around the world are increasingly imposing restrictions on online content and data privacy. While some regulations may be necessary to protect users from harmful content or cyber threats, they should not infringe upon individuals' rights to free speech or impede innovation. And I think that's one of the key beliefs, because so far we haven't gone to that extent. Best practices should strike a balance between regulation and freedom, with education and transparency policies that respect users' rights. So, thank you. Thank you. So we have a few minutes left. Farzana Badi here, in the role of remote moderator. I'm very opinionated, so I have to clarify that this is not my question; I'm doing the remote moderation for this session. So we have a question from Rohana Paliagoro.

David Sullivan:
Have any of the online safety acts defined content-related matters, such as harassment, defamation, et cetera, as criminal offenses? Yeah, I think that's a great question that is complicated to answer, because there are so many different jurisdictions taking different approaches to that. So I'll open it up to see if anyone from the panel or anyone in the room would want to add some expertise there.

Nobuhisa Nishigata:
But let me just say, generally speaking, it depends on what the case would be. For the bad ones, of course, law enforcement can make the case to charge. But in some other cases, law enforcement couldn't make the case to charge directly, but then there is a private lawsuit, and they get some sanction or mitigation, those kinds of things. So it totally depends, I would say. Thank you. I was just going to say, I think that really exists in terms of areas where, in

Angela McKay:
the physical world, there has been an idea of criminality, and those laws are able to extend into the digital world. So child safety is an example. I think where you have the offline harm also being exacerbated in the online space, where there is existing law, is where I would say that currently is.

David Sullivan:
Thanks, Angela. And I think Brent wanted to come in, and Rishika. So we'll, yeah, Brent and Rishika. Just quickly, in the New Zealand context, yes, under the Harmful Digital Communications

Brent Carey:
Act, it’s a criminal offence to post an intimate image without a person’s consent. And a New Zealander risks either a $50,000 fine or two years imprisonment. So that’s a type of harm that is criminal, and the police do prosecute those matters.

David Sullivan:
And Rishika.

Rishika Chandra:
It's the same for Fiji. We do criminalize harassment, cyberbullying, image-based abuse, and child exploitation. If it's an individual, they can face five years of imprisonment or a $20,000 fine, and if it's a corporate entity, an organization, then it's $50,000. Defamation was also mentioned. Defamation is not covered under this Online Safety Act.

David Sullivan:
But Fiji does have their own defamation act, which is separate. Great. Thank you. Terrific to get the specific answers to that from some of the countries that are represented here on our panel. So I think we probably have time for maybe one or two more questions. So back to the room. Yeah, thanks.

Audience:
I'm Chen. I'm from the ISOC Taipei chapter, but today I'm speaking from a consumer perspective. As a Gen Z, I'm a gamer and a casual memer, so apparently I'm a target audience of the products you have in this alliance. So I have a real question. Lots of online moderation is going on in these products, but I think these days a lot of customers of these products are not very satisfied with how these online moderation rules and regulations are enforced. We understand these partners from the private sector are the key actors in how to enforce this kind of online moderation. But on the other hand, if you are going to be transparent about how the rules work and reveal the process to your customers, it might also hurt your business, because it might reveal the secret sauce of your business. So I wanted to ask how your alliance is trying to find a balance between transparency and the trust of your customers, and how to get your partners on board with this. Thanks.

David Sullivan:
So I can take a first stab at answering that and then welcome comments from others. It's a great question. It's a very good point. And I would say, just right now within our partnership, we have companies like Microsoft that are gaming companies. We have companies like Discord and Twitch where gamers congregate or stream. And I think there's an opportunity to bring more companies from that space in, and to say here's what this framework means, with more specifics, for gaming. Same thing for dating, for the sharing economy, for different places. The question of sort of how much you share and how much you hold back is a very good one. I think for a long time this function inside companies has been very quiet and very low profile. And it's not just because you don't want bad actors to be able to say, oh, okay, we figured out how to get around that, but also because sometimes the employees working in these functions become the subjects of harassment when people do not like what's happened to their content, or because of privacy considerations or other considerations. And I think there is a need to sort of shift to err more on the side of sharing, while being conscious of all of those tradeoffs. Someone has said that trust and safety is tradeoffs and sadness. And one thing I will say also, and then we'll have one more question and then close out, is for folks who are interested who are gamers, I would recommend a game that is, I believe, available in the Apple App Store (I don't know if it's on Android yet) called Moderator Mayhem, which friends and colleagues have made, where you are in the role of doing content moderation inside a company. It is one of the most stressful iPhone games I have ever played. But it also gives a sense, you know, it's easy to say, oh, these are gigantic companies and they have the resources and they should be able to solve this. But some of these things are just a perpetual challenge for all of the reasons people have said

Audience:
and that game is a very good illustration of that. So I think we can do one more question and then we'll wrap. Hi, thank you. My name is Andrew Campbell. I run a public policy and public affairs consultancy. Just two very short questions. Does the sort of activity of X, or Twitter, in this space undermine the entire industry and its credentials for sort of voluntary action in trust and safety? And then, is a non-prescriptive duty of care to users a good legislative approach towards this? So I can answer the first question by just saying that we're not in the business of commenting

David Sullivan:
on particular companies. However, X, formerly Twitter, was a founding member of our partnership and is not a member of our partnership at the moment. On the second question, I don’t have to answer that question because our forum is not a lobbying organization. So we are not taking positions on legislation. But I imagine that some of my panelists may have answers on a question that could easily be its own whole session. So maybe if folks want to give last, any concluding thoughts on that or anything else and then we’ll wrap. So we’ll do rapid fire across the panel here and online. Actually, I missed the chance to talk about the legislation in South Korea.

Kyoungmi Oh:
Actually, we also have an act on the punishment of harassment or sexual offenses, something around that, over communications networks, which can punish people. It is a criminal act, yes. I think this place and this opportunity let us talk about how we are different and what we are heading for. And I think the transparency report, your report, the DTSP report,

Nobuhisa Nishigata:
can be concluded very well, hopefully. Go ahead. Just my final comment is that I'm looking forward to seeing the further development of this work. And hopefully (actually, I talked with him about this) some Japanese company will join the partnership. That is somewhat uncertain, but still, anyway, it's a voluntary thing. To me, from the Japanese government perspective, of course, if in the end things get worse, we have to do things, particularly regulation, those kinds of things. But from the beginning, in Japan, we have the basic law about digitalization and the digital society. It articulates that investment in the digital infrastructure and digital society is private-led, should be private-led. The government is more about coordination, those kinds of things. We have a list of the things that the government should do. So in my mind, as the Japanese government, the bottom line is to just let the private sector go first and we follow, or we catch up. So from that perspective, I just want to see the development in the near future. And let me end by congratulating you on the process so far and the launch of the report. I'll stop here, then.

Angela McKay:
Before we go over to our online panelists, I'll just say, I think, again, the premise that it's either regulation or transparency is a false dichotomy. I think we really have to think about what behaviors we're seeking to drive, and then you can get to whether the enforcement mechanism should be regulatory or not. But the "what" that I think is really important, that DTSP is contributing here, is a way of approaching online safety. And so we can keep arguing about regulation versus no regulation, but I think the conversation that needs to move forward is less on that side and more on what we are seeking to drive.

David Sullivan:
So then, thank you, Angela. Now I think we can go to Brent. Thank you.

Brent Carey:
Yes, we used to sit on Twitter's Trust and Safety Council previously. So our comments are in relation to who's actually trying to drag the lowest common denominator up and lead. And even though we're not on the Twitter Trust and Safety Council anymore, X is a founding member of our online safety code and remains in the code. And whilst they've removed themselves from the European disinformation code, they remain an active member and are providing localized data. So I think it's easy to call out particular platforms or particular times that they're not being a good corporate citizen in the eyes of particular stakeholders. I think it's on us to try and drag everyone in the industry up to the highest common denominator. And so trying to get them to lead, because there are many other platforms and many other messaging apps that are not even part of any sort of move to actually try and show best practice. And I think it's on us to actually try and work out where they are and try and bring them along to improve the whole ecosystem. And the duty of care here is a really interesting argument, which I could talk about forever and I'm not going to talk about now. But I think it's a very interesting and emerging approach to this issue.

David Sullivan:
So I think it was a good thing to talk about. Thank you, Brent. And Rishika, did you have any final thoughts, in two sentences? So Fiji, being a small Pacific Island country,

Rishika Chandra:
we don't actually have a lot of apps being used by our citizens here. For example, Twitter. You wouldn't believe it, but Twitter is not really an issue in Fiji; we don't have many issues from Twitter. But we do have issues with Meta and Instagram, and there is Snapchat and all the others. Recently there are a few new apps, such as Discord and the Line app, which we actually had not heard about before, and nowadays we are getting a lot of issues on those platforms. So we think that we really need to build that relationship with the social media platforms to learn more about how they design their policies, community guidelines and everything. Before regulating them in our policy, we should first understand how these platforms work, because we know that these platforms are dangerous at times, but they do act as a platform, as connectivity to all the other people out there. So yeah, maybe we can have a balance there if we work hand in hand in a collaborative manner. Thank you.

David Sullivan:
Very well said. So I think that’s a great note to end on. We have a sign-up sheet, we have a booth in the Village. Come talk to myself or Farzana to learn more about what we’re doing. My thank you to all our panelists here and online. Thank you everyone and I hope everyone has a good rest of your IGFs. Thank you.

Angela McKay: speech speed 191 words per minute; speech length 1634 words; speech time 513 secs

Audience: speech speed 168 words per minute; speech length 1228 words; speech time 439 secs

Brent Carey: speech speed 153 words per minute; speech length 1387 words; speech time 546 secs

David Sullivan: speech speed 159 words per minute; speech length 5984 words; speech time 2253 secs

Kyoungmi Oh: speech speed 135 words per minute; speech length 1429 words; speech time 635 secs

Nobuhisa Nishigata: speech speed 161 words per minute; speech length 1285 words; speech time 480 secs

Rishika Chandra: speech speed 145 words per minute; speech length 1132 words; speech time 469 secs