Next-Gen Education: Harnessing Generative AI | IGF 2023 WS #495

9 Oct 2023 07:30h - 08:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator

The Digital Trust and Safety Partnership brings together technology companies, including Microsoft and Google, to develop industry best practices for trust and safety. This collaboration aims to create a safer online environment for users by addressing potential risks and harms associated with digital technology.

One key focus area for the partnership is the work stream on digital safety risk assessment. This involves conducting human rights risk assessments, data protection impact assessments, and AI or algorithmic impact assessments. By comprehensively addressing these risks, the partnership aims to identify and mitigate potential harms.

The partnership advocates for the adoption of a risk assessment framework in online safety. This involves identifying risks, reducing them, mitigating harm, and repairing any damage caused. Reporting incidents is also emphasized to ensure accountability and learning from past experiences.

In addition to risk assessment, the partnership recognizes the importance of understanding risk factors and measuring their impact. This helps in developing effective strategies to address online safety issues.

The World Economic Forum’s coalition is commended for its ability to bring together experts from different fields to determine best practices for digital safety. This collaborative effort ensures a holistic approach and cross-sector knowledge sharing.

The Global Coalition for Digital Safety actively counters digital harms through media literacy initiatives. By promoting media literacy, the coalition aims to combat disinformation and educate users about safe online practices.

Involving different stakeholders, organizations, and companies is emphasized throughout the discussions. This inclusive approach promotes innovation and fosters fruitful discussions in tackling online safety challenges.

The issue of technology-facilitated abuse is highlighted, emphasizing the need for comprehensive safety measures that protect all individuals, not just those traditionally considered “vulnerable.”

Furthermore, the importance of gendered safety by design is stressed. Companies need to understand how online abuse affects women differently and incorporate measures to address these challenges. It’s also important to recognize and adjust when gendered safety measures fail.

Support for small and medium-sized companies in achieving online safety is recognized. These companies often face resource constraints, and resources such as the Digital Trust and Safety Partnership framework and the eSafety risk assessment tools can assist them.

The discussion also emphasizes the need to incorporate online safety considerations into privacy design. Ensuring transparency about data collection and its use for safety purposes is crucial.

Finally, the challenges of creating global solutions for online safety that are locally sensitive are acknowledged. Localized and culturally sensitive approaches are crucial to address the unique challenges faced in different regions.

In conclusion, the Digital Trust and Safety Partnership, in collaboration with the World Economic Forum’s coalition and the Global Coalition for Digital Safety, aims to develop industry best practices for trust and safety in the digital realm. Through risk assessment, stakeholder involvement, gendered safety considerations, and support for small and medium-sized companies, the partnership strives to create a safer online environment for all users.

Audience

The Global Coalition for Digital Safety, initiated by the World Economic Forum, is a multi-stakeholder platform aimed at addressing harmful online content. It includes members from tech platforms, safety tech players, government regulators, civil society, international organizations, and academia. The coalition’s work is divided into four work streams: developing global principles for digital safety, digital safety design and innovation, digital safety risk framework, and media information literacy for tackling disinformation. The principles aim to address specific harms such as child exploitation, terrorism, violent extremism, and hate speech.

The coalition emphasizes the importance of diverse collaboration, transparency, evidence-based solutions, and understanding the interconnectedness of online issues. It also highlights the need for comprehensive assessment of risks, tackling the challenges posed by immersive and invasive technologies, and sharing best practices. The Digital Trust and Safety Partnership, consisting of companies like Microsoft and Google, is working on industry best practices.

The coalition focuses on gendered safety, involvement of non-Western corporations, and engaging diverse voices. It also recognizes the challenges faced by underserved communities and emphasizes the importance of privacy impact assessments. Leveraging data with transparency, developing scalable and culturally sensitive solutions, and involving youth and community participation are key aspects of the coalition’s work. Overall, the coalition aims to create a safe and inclusive online environment through collaboration and best practices.

Connie Man Hei Siu

During the discussion, the speakers explored the differentiation between cybersecurity and online safety, highlighting their distinct focuses and roles. The WEF was mentioned as an organisation with a dedicated centre that focuses specifically on cybersecurity.

Cybersecurity is primarily concerned with protecting infrastructure and data. It emphasises preparedness for potential cyber attacks, including safeguarding critical systems, networks, and data centres from unauthorised access, intrusion, and theft. By prioritising the security of essential infrastructure and sensitive information, cybersecurity aims to prevent and mitigate potential damage and disruptions caused by cyber threats.

On the other hand, online safety primarily revolves around ensuring safe and secure user experiences. Its objectives include addressing and combating harmful content present on the internet, such as cyberbullying, explicit or inappropriate materials, and scams. Online safety initiatives aim to create a safer online environment, especially for vulnerable groups like children and teenagers. This involves educating users about potential risks, raising awareness of safe practices, and implementing content management strategies to filter out harmful content.

The speakers agreed on the importance of establishing a clear distinction between these two areas. Differentiating between cybersecurity and online safety allows for appropriate allocation of resources and attention to address the specific challenges and objectives related to each domain. It also enables effective collaboration and coordination among policymakers, industry professionals, and users.

Throughout the discussion, it became evident that cybersecurity and online safety are interrelated but require separate strategies and approaches. While cybersecurity focuses on protecting critical infrastructure and data, online safety centres on safeguarding users’ well-being and ensuring positive digital experiences. Recognising the distinctive roles of cybersecurity and online safety is essential for the development of comprehensive strategies that enhance digital security and foster safer internet environments.

In conclusion, the discussion emphasised the importance of differentiating between cybersecurity and online safety. The WEF’s attention to cybersecurity, with its focus on protecting infrastructure and data and preparing for cyber attacks, was highlighted. Online safety, on the other hand, centres around combating harmful content and building a safer online environment for users. Recognising these distinctions enables the implementation of targeted measures and collaborations that promote both cybersecurity and online safety, ultimately contributing to a more secure and user-friendly digital landscape.

Session transcript

Audience:
of the Global Coalition for Digital Safety, and then I will hand the floor over to our co-chairs to share part of the work that we’ve been doing since we started the coalition last year. While our time is limited, we also value your input, so towards the end we will open the floor for questions, comments, and your insights regarding the upcoming challenges in online safety. I know there are too many, but at least we want to hear what you think are the main issues that we should be addressing and how we can tackle them from the coalition. So, that said, the Global Coalition for Digital Safety is a multi-stakeholder platform launched by the World Economic Forum, and its goal is to promote global cooperation in addressing harmful content online, which includes both illegal content, but also legal content that is also harmful. Our members represent various sectors, including tech platforms, safety tech players, government regulators, civil society, international organizations, and academia. You can find the list of our members on our website, which I can share later, but the work of the coalition began last year in response to growing challenges related to content moderation, child protection online, and it evolves as we see generative AI basically everywhere, as you can see here at the forum today. To tackle these issues effectively, we have divided our work into four work streams. The first one is the work stream on global principles for digital safety, and this work stream addresses how international human rights principles apply in the digital context. Early this year, in January, these principles were published and were developed in a multi-stakeholder way, and Courtney, one of the co-chairs, together with Ian Redman from the WeProtect Global Alliance, will share more details soon. Secondly, we have the work stream on digital safety design and innovation, and this group identified technology, policy, and processes and design interventions that are needed to advance digital safety. This group is led by Julie here, and also by Adam Hildreth from Crisp, and the group has already made significant progress, including the recent launch of the typology of online harms, aiming at creating foundational language for online harms. So, thirdly, we have the digital safety risk framework, focusing on the new regulatory requirements that are currently on the scene, and this work stream has developed a risk assessment framework, and is currently working on measures to evaluate the effectiveness of digital safety systems and interventions. So, this group is led by David here, and is also co-chaired by Gill Whitehead, the online safety group director at Ofcom. So, lastly, and before I give the floor to our co-chairs to share more of the nitty-gritty of the work we have done, this year we have launched a work stream on media information literacy for tackling disinformation. This is our most recent work stream, and it aims to emphasize the importance of media information literacy for enhancing digital safety with a focus on disinformation, and Angela here is one of our co-chairs, together with Sasha Havlicek, chief executive officer of the Institute for Strategic Dialogue. Well, without taking more time, I will now pass the mic to Courtney first, to provide more insight into the global principles on digital safety. Courtney?

Moderator:
Well, thank you very much, and thanks to the World Economic Forum for

Audience:
bringing us together for all of this work, to be perfectly honest. I wanted to provide just a little bit of background on the principles that were articulated, and just be clear, the question we sought to tackle was how can we meaningfully apply fundamental human rights principles to a digital world in an actionable manner for key stakeholders, government, technology companies, and civil society, and the principles actually came about as the result of some intensive discussions, expert interviews, and consultations among a diverse group of global experts, policymakers, social media and tech platforms, safety tech companies, and civil society, and academics. You can access the full principles on the World Economic Forum website, and although it may not sound like the most interesting part, I encourage you to start at the appendix, and the over 25 resources that we cited, talking about how many times we’ve been adopting digital safety principles to address specific harms, and that was critically important, whether it was child exploitation, or terrorism, and violent extremism, or hate speech, and the purpose here was to take it up a level, and understand what are the fundamental principles behind all of that, but those resources are still important. So at the outset of the principles, we make a really fundamental point, and that is to articulate the important role civil society plays in helping bridge the gulf between public and private institutions, including amplifying the voices of the underrepresented, and most vulnerable, and keeping that in mind, I just wanted to give the high-level notion across the ecosystem, meaning for all of the stakeholders, we said that supporters of the principles should collaborate with diverse stakeholders to build a safe, trusted, and inclusive online environment, enabling every person to enjoy their rights in the digital environment. Second was to seek insights and diverse perspectives from civil society to inform policymaking, understanding emerging harms, and support inclusive and informed decision-making. Third was clearly the need to support evidence-based solutions to assess, address, and advance digital safety, and prevent harm. Fourth was the critical role that transparency must play in approaches to, and outcomes of, advancing digital safety for a collective response, and lastly, we said it was important to recognize the particular importance of helping vulnerable and marginalized groups to realize their rights in the digital world, including the importance of defending children’s safety and privacy online. These were the collective principles across the ecosystem, but following the model that has been so effective in other multi-stakeholder collaborations, we then go on to articulate what this means in practice by sector, for government, for tech companies, and for civil society, and I encourage you to take a look at that. Now we’re in that next phase of the principles, and what did we all commit to? At the core, we committed to fundamentally make decisions and take actions aligned with these principles, and a core responsibility was to raise awareness of these principles across the online ecosystem, including through active promotion, targeted outreach, and the encouragement of multi-stakeholder adoption. In the past year, this has been at the heart of oftentimes how we have conversations with global regulators.
It is at the heart of how we have conversations across multi-stakeholder coalitions looking at a specific harm, and one of the most important pieces here is the goal of having all stakeholders hold others to account, that at the end of the day, we’re here for fundamental principles. We know when it comes to a safety perspective, you can have some heated debates among stakeholders, and aligning to these principles helps find the right collective response across the ecosystem. So we hope that these become a living document. They’re intended to be both content agnostic and technology agnostic, and really hold us to account as technology evolves, and to be perfectly honest, the landscape of harm evolves. And they are to be that kind of guiding light that brings us back to have healthy discussions about how we shape that work. I got to give big thanks to many in this room for helping build this collective action, and if I don’t put it crisply enough, one of the most important points remained how we continued throughout the process to ensure there was enough external engagement and consultation to be truly inclusive about the development. So I look forward to further discussion, but I think I’m supposed to pass the baton to Julie.

Moderator:
Thank you for all that foundational work, and what I love about this project is that it is truly multi-stakeholder, but the different pieces connect really nicely together like a piece

Audience:
of the puzzle. And so I think there is general consensus that principles-based frameworks are going to be the most successful. I can say that as a regulator and as someone who spent 22 years in the tech sector, but I’m going to go back to what Albert Einstein once said, and that is you cannot solve problems with the same thinking we used when we created them. So by the same token, we cannot hope to effectively address online harms if we don’t identify them, call them out, understand their interrelatedness, and their impacts. And so while there will always be differences in interpretation and gravity, it’s really important that we’re speaking from the same lexicon. And I can tell you as a regulator how many times I’ve heard, particularly from startups or companies that aren’t quite startups, I’m thinking about Zoom. In late 2019, they had 10 million daily active users. By April, once we were fully into lockdowns around the world, they reached something like 300 million daily active users. And then the Zoom bombing started happening. And of course, it was named for them. They had to go offline. They had to fix their systems. They lost trust. As a government agency, I’m still not technically allowed to use Zoom. So how do you kind of quantify that loss of trust and what that might translate into loss for a particular online safety product? So understanding the harms and calling them out is important because if we don’t know what the problems are, then we can’t come up with effective solutions. So we all take different approaches to this. And we did some work through our safety by design principles and risk assessment tools of trying to assess online harms as we saw them played out. But we realized we had a perfect opportunity with the Digital Content Safety Working Group to really broaden and expand this. And my partner, our co-chair in this, was Crisp Thinking, a risk intelligence group based out of the UK that was just acquired by Kroll, Adam Hildreth. Brilliant thinkers. They’re doing risk intelligence. So they brought a very different perspective, but you get to a better place when you have those perspectives. So when you’re thinking about online harms, effectively everything that goes wrong in the real world and in humanity can also be playing out online. So we could have created a 300-page tome that was extremely exhaustive. We decided not to do that. And there are lots of different ways you can slice and dice and cut these issues. So a common taxonomy is around content, conduct, and contact. So we use that as one set of framing. But then what we decided to do was to actually group the harms. And we also wanted to differentiate illegal content, which is clearer, and which the companies have different sorts of systems for addressing. There are different technologies to address those issues versus the legal but harmful, what is often referred to as lawful but awful, content as well. But we wanted to find a way to bucket them. And this is where, once we broadened this out, there was a whole lot of discussion about where things belong and where they interplayed. But we ended up ultimately with six categories. Again, I think really well aligning to the principles as well as to a range of human rights that need to be balanced. And so those are threats to personal and community safety, harms to health and well-being, hate and discrimination. The fourth is violation of dignity. Things like image-based abuse, sexual extortion is a manifestation of that, that we’re all dealing with at the moment.
I’m looking at Boris. Invasion of privacy, and then deception and manipulation. And you could think of that as any form of social engineering, whether it’s scamming, whether it’s grooming of children, whether it’s misinformation. So you can see that these are broad categories. What we tried to do was then list them. So these are the kind of the current typology of harms. The next step of the project, and this is going to be really interesting, is will the harms of the future simply be supercharged? You know, when we think about immersive technologies and the, you know, when you have full sensory and hyper-realistic worlds where things are happening in real time and private spaces, are things just going to be more visceral and extreme? Or are we going to have different kinds of harms? I mean, there are companies right now working on haptics that will simulate the feeling of a bullet wound. There’s a whole new industry around teledildonics. Do I need to explain to you what teledildonics are? Okay. Well, any sort of connected vibrators and sexual tools and haptic suits that will help you feel sexual pleasure, which could be great for someone who is, you know, a quadriplegic or a paraplegic, but if you’re experiencing sexual assault in the metaverse, it will feel like a real sexual assault. But then when we get into beyond immersive technologies into invasive technologies, neurotechnology, nanotechnology, chips implanted in your brain, thinking about what is the last bastion of freedom of expression and freedom of thought when, you know, employers can, you know, read your mind. Already companies like Amazon and Walmart are using these types of tools that go into your ears to manage productivity on warehouse floors. Great applications for truck drivers who might be falling asleep once they start going into the REMs. It will wake them up. But what if, you know, Elon Musk sticks a chip in and it, you know, I don’t know, it fizzes out, it defaults. You know, I don’t know what you do then. Anyway, so it’ll be a creative way to think about can we anticipate what future harms, again, will it be supercharged or will there be a whole new set of harms that we’ll have to engineer misuse out of. And then finally, this is the positive part of the work that we’ll do in conjunction with the risk assessment tools is looking at best practice and safety by design interventions that work, innovations that work. All these companies are dealing with the same wicked problems, but their platforms are different. But what can we learn from each other in terms of what is working? You know, every platform wants to have a safe, more positive, less toxic platform. We have to remember that it’s when we’ve got humans in the frame, they’ll always find creative ways to misuse that technology. So what do we learn from that? So hopefully the suite of materials will end up being quite successful and used and, you know, help us on that pathway

Moderator:
to a safer online world. Thank you, Julie. David, do you want to continue?

Audience:
Great. So I’m David Sullivan. You may have heard of the organizations that my other co-chairs work for. You may not have heard of the Digital Trust and Safety Partnership. So we bring together technology companies with different types of products and services to develop industry best practices for trust and safety. And our members include my colleagues from Microsoft and Google, among other companies I see in the room. So it’s been my pleasure to co-chair the third work stream on digital safety risk assessment. I’ve been doing that together with my co-chair, Gill Whitehead, from the UK Ofcom regulator. And so the question that we were asked to respond to is how not just digital companies, but all of us in the sort of multi-stakeholder world can assess risks to digital safety and also try to measure the impact of interventions to address those risks. And risk assessment, you know, I think has become a very hot topic in the world of online safety. You have many different regulatory regimes that require some sort of risk assessment that are taking place. And our role in this work stream was not to develop, you know, how one should comply with online safety regulations in Australia or in the UK or in Europe or in Singapore, but instead to try to distill higher level guidance that, whether you are a company, a government or regulator, or civil society, you can use across all of the different jurisdictions and around the world where these types of risks are manifesting. So I think in addition to the outputs, and I’ll talk a little bit more about what we have done and what we’ve planned, I want to emphasize, I think, the really valuable process that the World Economic Forum has brought to bear in this coalition. So in our work stream, we recognized that companies and others have been doing all kinds of risk assessments related to the digital space for years, whether it’s human rights risk assessments, data protection impact assessments, artificial intelligence or algorithmic impact assessments. And so we wanted to gather as much expertise as we could to learn what is already being done in order to help figure out what is the right contribution that we can make. So we gathered that input and then we spent, you know, time in conversations, with Julie or her colleagues joining very late in the middle of the night or sometimes very early in the morning for me in the western U.S., virtual discussions where we were able to kind of distill a risk assessment framework with a few steps to it, really about identifying risks, reducing those risks, mitigating harms that have occurred, repairing harm, and then reporting. And we coupled that framework with a bank of case studies based on existing practices. So we had a case study about the work of the Digital Trust and Safety Partnership. We had a case study about the safety by design approach from eSafety in Australia. We had a case study about AI at Microsoft, among several others. And we published those case studies in the risk assessment framework earlier this year.
Moving forward, as has been mentioned, our next deliverable is going to be a report about risk factors and ultimately about metrics and measurement where, again, we are not trying to do the work of the regulators or do the work of the regulated, but I think try to bring together all of these actors and figure out what are some approaches that work, what are best practices that can be used by everyone to underpin all of these emerging, whether it’s regulatory or voluntary efforts that are happening around the world, so that we have a more informed approach to this critically important aspect of assessing and addressing risks to safety online.

Moderator:
Thank you, David. I will leave the floor to Angela now to share more about it.

Audience:
I think you heard the thank you. So what is interesting and different about the fourth work stream here is we haven’t actually really started yet. We have our first meeting coming up in December. And so what I’m happy to do here today is share a little bit of context about the media literacy to counter disinformation work stream, but also really want to solicit your ideas and inputs into we’re really scoping out how we’re going to approach this problem. So first maybe just a little context on why does this work stream exist? I think we all know if you’ve been in any of the kind of harms conversations, you’re hearing about mis and disinformation, we’re constantly talking about how do people know how to interact with the information that is online? How do they know that it is, you know, good information, authentic information from someone they can trust? Well, that’s really, really difficult in this age. And there’s a ton of different initiatives in both the public and private sectors really thinking about media literacy. But one of the things that WEF wanted to do, and we think is a really interesting contribution, is really understand the topology of those different initiatives. Which communities are these initiatives focused on? We know historically, for example, that a lot of effort has been given to media literacy and information literacy for the youth. But at the same time, all of you are also interacting online and not necessarily always having the skills that we need to be able to do that. So the first effort that we thought would be a really useful contribution was assessing the landscape and understanding a topology of the initiatives that have been around for media literacy. Which community are they targeting? Are there underserved communities, for example, consistent with the principles that Courtney was talking about? How are we thinking about vulnerable populations in this space? The second thing that we were then thinking about is making sure that everyone understands also the importance of this issue, maybe a call to action that really helps bring forward focus on this. And then finally, the last thing that my co-chair, Sasha, and I just talked about, I think a week ago, Augustina could tell me if I’m wrong, is really thinking about the spaces of intervention, almost like a kill chain. There’s a lot of focus on intervention at the individual user, at the point of interaction with harmful content. And there’s a lot of space upstream of that where we could be talking about what’s going on in the platform environment. What are we doing in safety by design? But also off platform, what’s going on in the education environment? How are we thinking about integrating media literacy into the day-to-day operations of businesses, not just the tech companies, but businesses on an ongoing basis? So those are a couple of the different things that we are thinking about as we approach media literacy to help counter mis- and disinformation. Really again, making sure there’s a call to action to understand the importance, a topology of the environment, and then really highlighting some different interventions that could exist along the user experience. So again, just a couple of thoughts on what we are considering in terms of our approach, and would welcome any thoughts or solicitations from folks who are attending. Thank you.

Moderator:
Thank you, Angela. And thank you to all my co-chairs for sharing all the work that the Global Coalition for Digital Safety has been doing. And I have to say that I joined a couple of months ago, six months ago, and I’ve been impressed by the amount of work and the level of engagement that we have from our members. So I invite you all to read the reports and stay tuned for the upcoming work of the coalition. You can, if you Google Global Coalition for Digital Safety WEF, you will find everything. Thank you, Google. But also, if you want to chat more about this work, you can find me at the IGF this week. And now I think that we have time for questions, comments from the audience. So we would love to hear your views as well. Yes.

Audience:
Thank you very much. I’m Ken Katayama. I’ll speak in my academic role. I have a role at Keio. Speaking from Japan, I had a conversation earlier with Julie, but the difference between digital safety, cybersecurity, freedom of speech, this is very blurry in Japan, at least as a Japanese, I feel. And Angela also knows I work very deeply in cybersecurity. Some of the stuff that you were mentioning today, we in Japan deal within the cybersecurity framework, so to speak. So how do you respond to that? I agree with what all of you are saying, but I guess as a Japanese, how do we bucket what you’re saying into what we’re doing in Japan? Thank you.

Moderator:
Maybe Courtney wants to answer that.

Connie Man Hei Siu:
Okay. Well, at WEF, for example, the way that we’ve differentiated cybersecurity from online safety is that we have a center on cybersecurity that is focusing on understanding how we protect infrastructure, how we protect the data, how we prepare for cyber attacks. And then from the online safety piece, we are framing it as how we tackle harmful content online, so we are focusing on the content layer here and what the user experience is and how we can build a safer internet from the user’s perspective as well. So that would be the first point to your question. I don’t know if my colleagues want to add anything. I was going to point to Julie. I’m laughing. A little bit in the work on the typology of harms that I think is pretty critical to help. I mean, I’ll put a frame as we think about it. It truly is, if you’re thinking about the digital safety space, my definition, how we think about it from a governance perspective at Microsoft, is online content and conduct intersecting with personal harms. That is distinct from platform harms.

Audience:
That is distinct from economic harms. But I think the typology work of the working group really hits home here. The interesting space that starts blending, as we all know, is what may have been in a cybercrime space, in fraud or manipulation. Now when those are targeting more vulnerable populations, you start seeing an intersection here. I think we need to do it from a typology of harms because then the interventions, although some may be similar from a cybercrime perspective, they are very different as well. And you want them to be different because you want to be enabling productive, positive technology intervention, not halting interaction with technology. In a criminal context, that is probably the best outcome. So the intervention methodology also differs. In full disclosure, we were just saying all roads lead to Microsoft. I spent 17 years there. Ken and I know each other there. Of course, Courtney is there, and Angela and I were in Trustworthy Computing together. And more than 10 years ago, I was asked to develop the first trust and safety strategy and business plan. And then it was sent back to me because I had to explain to our executives, who were deep cybersecurity experts or privacy experts, well, what do you mean by trust and safety? And what we eventually landed at is any kind of personal harm that results with technology facilitating some form of interaction. It’s when people are interacting online, whether it’s via chat, social media, DM. It’s covered here, dating, gaming platforms, and all those things. And we’re often asked because we’re having debates. And Maria mentioned this in our session with Jacinda Ardern and Maria Ressa. I think of security, privacy, and safety as three legs of the stool. And if one of them is out of balance, then the stool falls over. But right now, we kind of have a bit of a false binary that you have to have either absolute privacy or absolute safety, where the truth is probably that there has to be a balance. And when I think about cybersecurity, I think of the systems, processes, and technologies. Again, safety regulation is tending to move more from just content regulation to more systems and processes. We’ve got powers to do both. And I would actually argue that while I do think systems and processes over time will hopefully help to lift online safety standards broadly, one of the most important things we do is remediate harm in real time. And I’m not sure that that will happen purely through systems and processes. The way that targeted online harassment now happens is by targeting individuals who may be part of vulnerable groups, but that has an impact on democracy and society, individuals. So I think we still need a combination of both, because if we’re not bolstering the humans and helping the humans and serving as that power balancer, then we’re going to have a lot of damaged individuals walking around. Thank you.

Moderator:
There’s another question there. Please go ahead.

Audience:
Hi. My name is Chang Ho. I’m a lawyer coming from Japan. And just my question is, are there any non-Western companies, like corporations, involved in the engagement of the stakeholder process? And do you have any plans to make it known to, let’s say, Asian companies in East Asia, for example, here in Japan or South Korea? There are many other non-Western platforms, like Line or Kakao and so on. I mean, I just want to know if you have any plan. I mean, WEF has an office in Tokyo, for example. I mean, not sure. I mean, maybe you can just build some partnership with the local office here. So, yeah. Thank you.

Moderator:
Thank you. Well, one of the reasons for being here at the IGF is to talk to different companies, different stakeholders, because we want to continue bringing different perspectives into the coalition work. So any ideas of organizations we should be involving that are relevant for the discussion that we are having, please reach out, and we will be happy to have a conversation. Yeah, I would just add to that quickly, just that, yeah, I think there is concerted outreach right now, both for this coalition as well as for our own Digital Trust and Safety Partnership; involving companies from other parts of the world is critically important. And so that’s something that, yeah, I’m here to talk, and we have a booth in the exhibition hall, with as many folks as possible. And I know that companies like Line and Kakao have really been leaders in different areas of this space and have worked with other coalitions. And so I think it’s really, the time is now for companies like that to get involved.

Audience:
Hi, I’m Sherri Kraham Talabany. I am a human rights lawyer, and my organization, SEED Foundation, is a protection actor in Kurdistan, Iraq. There, online violence results in honor killings, sexual exploitation, abduction, financial exploitation, human trafficking. So my question was similar. How do you intend to engage the developing countries to ensure that the risks that individuals face in these environments, with high levels of gender inequality, with high levels of violence generally, how do you intend to engage those communities? I think it’s relevant to the typology of harm, which I think in our part of the world across the Middle East is quite different than maybe other places. So that’s my question. Thank you. I can start that. Well, I’m myself, I’m from the Global South. So since I joined the initiative, I’ve been reaching out to organizations in Latin America and Africa to try to get them involved in the work that we are doing, because we believe that, as I said before, the more voices we have, the better prepared we can be for tackling the harms that you just mentioned. So at the coalition level, we are doing a lot of work in terms of outreach, connecting with organizations, not only private companies, but also civil society, governments. And we are open for conversations with any organization you think should be relevant. So happy to chat about your work as well after this. But again, it’s very important that we bring as many voices as we can to this conversation. And as I said before, the coalition is very solid in terms of the level of engagement and the commitment that we have. So the foundations are really strong, and bringing in more voices will make it even stronger. I might just add, again, Workstream 4 is just scoping out how we’re going to move forward. But my co-chair, Sasha and I, and Augustina will remember this, we talked specifically about how we wanted to make sure in that topology of understanding what’s out in the environment, making sure that we were reaching out to countries of different status, right? Whether it’s the global north or the majority world, really making sure that we were having a broad understanding of that environment. And then we started to talk a little bit, again, this is anticipating maybe some of what could happen in the future. Maybe there might be things where there are different model countries to be able to help move up and do some capacity building in media information literacy. Again, not promising that that’s the direction that we’re going, but one of the things that we were thinking about in scoping out our work is making sure that it is representing a diversity of the community. Just to, I think I’m expanding on that comment for a second. If I go back to one of the fundamental principles, you’ve got representatives here in co-chairs, so I don’t mean to speak for everybody, but that are investing additional resources and time outside of their day jobs because we are from some of the larger organizations who can handle global trust and safety teams. The goal of this work then was to build a scalable model for small, mid-sized companies to understand the landscape and resources that would help them scale their trust and safety capabilities. Now, even as a global company, what you just articulated to me is one of those gaps that we genuinely know would be so valuable to scale knowledge and capacity building across the trust and safety space.
What are the kind of, I’ll call it regional threat assessment landscapes that would be meaningful for us to think about? That might be a space that can be the growth trajectory, but I am acknowledging what we are hoping to do is really bring those resources to bear for all tech companies and platforms to understand the risk and the tools that they need. Hi, Hélène Molinier from UN Women. I would be very interested to hear how you bring a gender lens into everything that you do, because I think we’ve heard a lot about safety in neutral terms during your presentation, and just want to bring attention to the fact that when we talk about violence online and all the crimes I think that you just brought up, these are crimes that are in the majority affecting women. So, I think it’s important, when we look at safety, that we don’t talk about women as a vulnerable group, because women are half of the population. They are not a vulnerable group. It’s really something that should be front and center of everything that we do when we design safety, and I look at our principles. I look at the typology. I think it’s great, but I think it would be even greater if we could have a chapeau, especially if you’re building a model that helps build the capacity of smaller firms or people that don’t have as much information or as full a view of this typology, that we really make that something front and center in the work that we do. Thank you.

Moderator:
Thank you for that comment. The UNFPA put on a really important meeting on Saturday that had me get up at 4 o’clock in the morning and write a 20-page paper where I was trying to draw the strings together of what is technology-facilitated abuse, how does it manifest differently against women. Often we’re thinking about targeted online abuse, but one of the programs that we deal with involves women who are in family and domestic violence situations where, in 99% of cases, technology-facilitated abuse, through harassing texts, through microaggressions in banking transactions, through drones over safe houses and other manipulations of technology, is used by former partners to further isolate, stalk, harass and intimidate, a form of coercive control. And then I started thinking about gendered safety by design, and I was really trying to rack my brain and think about is there an example that I can come up with of a company that designed, understood how online abuse manifests differently against women, you know, rape threats, death threats, about appearance, fertility, supposed virtue. I can’t actually think of one. I can only think of fails. Like the AirTag, I remember when that came out in April 2021, I was like, oh, that’s great for me because I always lose my keys in the bottom of my bag. But gosh, this could be such a potent tool for those who want to surveil their former partners, and we, you know, we wrote to the company and just said what kind of safety by design assessments did you use? Well, we looked at privacy by design, and I’m like, well, two years later, and kudos to Google and Apple, but they’ve now just announced that you will get notifications when there’s a Bluetooth tracker or AirPods that are detected on your device that might be following you. But in that intervening period, we had so many cases where women experiencing technology-facilitated abuse had eventually, they couldn’t find out why their partners knew where they

Audience:
were. One of them, we found the AirTag was in the wheel of their car. But these are small devices that can follow people anywhere. So I would love to think about what gendered safety by design looks like and work with one of these great companies here to see if we can, you know, find a use case and apply it.

Moderator:
Thank you, Julie. Any further questions or comments? There is one there. If you can stand up and come to the mic here. Thank you.

Audience:
Thank you for the panel. That’s been very impressive. My name is Kohei. I’m working on privacy by design. So I’m very interested in your session because of the risk assessment aspects. I have talked with some smaller and medium-sized companies so far in terms of the privacy impact assessment. But for them, it’s very challenging because of a lack of resources, a lack of budget, and many kinds of burdens in achieving this goal. So you mentioned some great initiatives to make an assessment, right? But in this case, how do you support any sectors that lack resources and need help, even when they want to protect the safety of all citizens? Do you have any advice for them? So I think the, it’s a terrific question. And I do think that one of the highlights of this coalition is that its outputs are very digestible, sort of scalable entry points into more detailed resources that can sort of build a ladder for companies wherever they are in their sort of journey towards maturity when it comes to online safety. And so starting with the coalition outputs and things like the appendix to the principles that Courtney mentioned, the taxonomy, the case studies that we’ve put out on risk assessment. From there, there are a number of things that I would also point to that I think can be valuable. Our framework at the Digital Trust and Safety Partnership, it really is scalable in that the practices we’ve set out for trust and safety are things that can be used by companies at different stages and where the level of intensity of assessment of a company’s practices depends on the size and scale of the company, but also on the risks that it faces. And I think we need to look at both of those things, not just look at the biggest companies in the world because small companies can sometimes have products that have very specific outsized risks. I would also highlight some resources from Julie’s team, the Safety by Design Assessments, which have an assessment for startups, I believe. Yeah, I’d just say that these are free. Just look up eSafety, Safety by Design, Risk Assessment Tools. We have one for startups. We’ve just developed a free MOOC with the Royal Melbourne Institute of Technology, 12 hours of coursework in your spare time. But just to walk through the principles. I’m just gonna pick up on one thread you said, which is that you focus on privacy by design. So my encouragement would be all the resources we just referenced are critical to be built in when people are thinking about privacy by design. It’s time, at the very beginning of privacy by design, when you’re thinking about what data you’re gonna collect and how you’re gonna utilize that data, to be completely transparent that safety is a legitimate need for leveraging the data for that purpose. And so these resources help you understand the harms in the world that will happen because of technology-facilitated abuse, and that must be stated as to why data is leveraged for these safety purposes. Speaking as someone who has helped, I’ll call it, retrofit that for many of our privacy experts across a specific company, that is at its core helping people understand, yes, to be perfectly honest, there is some minimized data collection that needs to happen to help advance your safety online when interacting with these tech platforms. And we are gonna be transparent as heck about that, but that’s important to be building in with privacy by design. Thank you. One final question, right?
Yeah, please go ahead. Of course I have to be the final question. Hello everybody. First of all, I just wanna say it’s beautiful to see the regulator next to the regulator. I hope there’s, with one in between, yeah. It’s beautiful. Thank you. Boris from SWGFL and full disclosure, David Wright, the CEO of SWGFL, is also a lovely member. And it might be a million dollar question. Global cooperation and a multi-stakeholder approach, I think it’s almost evident, is the way forward. But the question is on the scalability of issues which Julie mentioned, on the harms of the future being supercharged, and with your lessons and learnings from developing global tools that have to have locally sensitive solutions. My question is how do we ensure that the solutions scale with the size and the impact of the issues and the challenges that we see, especially as we are seeing global issues that require locally sensitive solutions. And most often those solutions are not developed in the global South, but in the developed West. And those solutions then have a completely different impact. Thank you very much.

Moderator:
Very interesting question.

Audience:
And you deserve the million dollar question. I am an optimist about this inflection moment of generative AI to build more linguistic and cultural sensitivity into how we do this. If we do this right from the outset, we should have learned the lessons of the last 15 years. And it’s time to take those lessons into practice as we look at data training sets for generative AI at the outset. That is how to build it in, to your point. We know safety is localized; it needs a local lens. We need to understand the linguistic and cultural implications, but guess what? The promise, the capability of generative AI can turn that around at scale if we build it from the foundational building blocks up. I wanna believe that we have learned those lessons. We’re gonna have to put them in practice, but there are already some very, very promising concepts about how you start being much more nuanced and appropriate in your intervention methodologies from a safety perspective when you turn generative AI towards the promise of good. So hold us all throughout the ecosystem to account to do just that, but then we really need some innovators to think about how to build that at scale across the ecosystem from a safety lens. I guess I’m using Australia as a microcosm for the global world, but we found that the only way that you can develop authentic tools and prevention methods, and this probably goes to a technology, is through co-design. So we have a youth advisory committee that helps us develop our scroll campaign, which is for young people. It’s language and concerns developed by young people for young people. We did the same with our Learning Lounge LGBTQI+ materials, all of the consultation, all of the vendors that we use come from the community, and it was very courageous for a government agency to use eggplant emojis, I have to say, and we even used some peaches. But anyway, co-design is the way to go where you can scale that. Final word, David? I would just say, I do think that meaningful involvement, I’m not saying anything anyone hasn’t already said, that we need to all be in this together and hash it out, and I think that that is how we figure out something that is much more deliberate and better from the outset. Thank you, David.

Moderator:
Well, I think we have time to wrap up the session. Thank you very much for joining us today. You can find the reports that were just presented on the Global Coalition for Digital Safety website, and I know we could spend more and more hours discussing these issues, but we still have three days more at the IGF, so feel free to reach out and connect with any of us, with me, if you want to know more about the work of the coalition, or if you want to get engaged, or if you have ideas of organizations we should be engaging, please reach out. Well, see you around, and thank you very much again. Thank you so much. Thank you.

Audience

Speech speed

173 words per minute

Speech length

7105 words

Speech time

2467 secs

Connie Man Hei Siu

Speech speed

184 words per minute

Speech length

205 words

Speech time

67 secs

Moderator

Speech speed

166 words per minute

Speech length

976 words

Speech time

352 secs