Launch of Fellowship for Refugees on Border Surveillance | IGF 2023

9 Oct 2023 00:30h - 01:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

This comprehensive analysis covers a wide range of topics related to education, generative AI, risk management, information literacy, multi-stakeholder engagement, the actions of the European private sector in oppressive regimes, the impact of misinformation and disinformation, and the coexistence of privacy and safety in technology design.

One of the discussions revolves around educating people about generative AI and the need to mitigate its risks. An audience member seeks advice on how to explain the technology to parents and educators without technical knowledge: reassuring them that it is probably not the end of the world, while still warning of significant risks and of what can be done in families and classrooms to mitigate them.

Another argument highlights the importance of promoting critical thinking and curiosity among children in the age of disinformation and rapid technological change. The supporting facts include a quote from Jacinda Ardern, who describes the shift from relying on facts found through traditional library resources to today’s digital environment of multifaceted sources, and who urges individuals to ask where the information presented comes from and how it was produced. The argument underscores the need to equip children with the skills to navigate and critically evaluate information in the digital era.

The analysis also addresses the need for a multi-stakeholder approach to problem-solving and the challenges faced by civil society, particularly from the Global South, in effectively participating in solution-finding dialogues. These challenges include disparities in accessibility and effectiveness compared to governments and corporate organisations. This observation points towards the importance of inclusivity and equal representation in decision-making processes.

Another notable point relates to monitoring the actions of the European private sector, particularly within countries with oppressive regimes. The argument raises questions about how to effectively monitor the activities of companies operating in these contexts, such as China, Vietnam, and Myanmar. This highlights concerns about the impact of the private sector on human rights and the need for oversight and accountability.

The analysis also delves into the impact of misinformation and disinformation, noting that individuals who distrust institutions are more susceptible to these phenomena. This observation emphasises the importance of building trust in structures and institutions to combat the spread of false information.

Furthermore, the debate on designing technology that balances privacy and safety in the online world is also addressed. The argument suggests that current technology and design choices might limit the coexistence of privacy and safety, forcing the prioritisation of one over the other. This highlights the ongoing challenge of developing technology that can effectively address both concerns.

In conclusion, this analysis highlights the need to educate about generative AI, mitigate its risks, foster critical thinking and curiosity among children, ensure inclusivity in problem-solving dialogues, monitor the actions of the European private sector, build trust in institutions to combat misinformation, and address the challenge of designing technology that balances privacy and safety. These observations reflect the complexity and interdisciplinary nature of the issues discussed, as well as the importance of considering diverse perspectives to inform effective strategies and solutions.

Karoline Edtstadler

During the analysis, several key points were discussed regarding the views expressed by Karoline Edtstadler. Firstly, she emphasised the need for greater recognition and opportunities for ambitious women. Edtstadler observed that women who strive for success are often viewed negatively, being labelled as pushy or attempting to replace men. She believes that society should overcome this perception and provide more support and encouragement to women with ambitious goals.

Secondly, Edtstadler underscored the value of women’s perspectives in leadership roles. She argued that women’s distinct life experiences – particularly giving birth and carrying responsibility for nurturing and upbringing – give them a different way of perceiving life, and that these shared yet different experiences, such as motherhood, contribute valuable insights and decision-making capabilities.

In terms of AI regulation, the European Union’s efforts were commended. The EU is taking the lead in regulating AI and prioritising the classification of risks associated with AI applications. This focus on risk evaluation aims to strike a balance between promoting beneficial AI technologies and addressing potential societal impacts.

Austria was recognised for its proactive approach to digital market regulation. Even before the implementation of the EU’s Digital Services Act (DSA) and the Digital Markets Act (DMA), Austria had already established the Communications Platform Act, effective from 1st January 2021. Under this act, social media platforms are obliged to promptly address online hate speech. Austria’s early actions demonstrate the country’s commitment to creating legal frameworks concerning digital services.

Collaboration and multi-stakeholder involvement were identified as crucial factors in addressing the challenges posed by AI, digital markets, and misinformation. Edtstadler advocated for a concerted effort involving governments, parliamentarians, civil society, and tech enterprises. She emphasised the importance of collective efforts and shared understanding in tackling these complex issues.

The analysis also highlighted the importance of education and awareness in effectively handling the impacts of social media and new technologies like AI. This includes equipping the public with knowledge and skills to navigate technology, particularly among the elderly. Additionally, it was emphasised that regulations should strike a balance between ensuring safety and privacy while still fostering innovation.

Restoring trust in institutions, governments, and democracy was identified as a crucial objective. Given the rise of misinformation and disinformation during events like the Covid-19 pandemic, Europe aims to counter these challenges through robust regulations. By addressing the issue of misinformation, trust can be rebuilt among citizens.

It was also noted that technology, including AI, should not replace human decision-making, particularly in matters like judgment in law enforcement. While AI can offer efficiency in finding judgments and organising knowledge, drawing a clear line between human judgment and AI is important.

Handling the downsides of technology was deemed necessary to ensure its benefits for society. Technologies like AI can be used for good, such as performing precise surgeries and speeding up tasks in law firms. However, challenges and risks should be addressed to make technology beneficial for all.

The analysis further underlined the importance of a multi-faceted approach in decision-making processes. Edtstadler highlighted Austria’s implementation of the Sustainable Development Goals (SDGs), wherein civil society was invited to contribute and share their actions in dialogue forums. This multi-stakeholder approach promotes inclusivity and diversity of perspectives in decision-making.

In conclusion, the analysis emphasised the need for recognition and empowerment of ambitious women, effective regulation of AI and digital markets, collaboration among stakeholders, education and awareness, addressing challenges in democracy and technology, and restoring trust in institutions and governments. These key points and insights offer valuable perspectives for policymakers and individuals seeking to promote a fair and inclusive society in the face of technological advancements.

Jacinda Ardern

The Christchurch Call to Action is a global initiative aimed at tackling extremist content online. It was established in response to a terrorist attack in New Zealand that was live-streamed on Facebook. Supported by over 150 member organizations, including governments, civil societies, and tech platforms, the Call sets out objectives such as creating a crisis response model and better understanding the process of radicalization.

Former New Zealand Prime Minister Jacinda Ardern believes that it is crucial to understand the role of content curation in driving radicalization. She highlights the case of the terrorist involved in the Christchurch attack, who acknowledged being radicalized by YouTube. Ardern calls for an improved understanding of how curated content can influence behavior online.

Ardern advocates for a multi-stakeholder solution to address the presence of extremist content online. She emphasizes the need for collaboration between governments, civil society, and tech platforms, recognizing that it requires a collective effort to effectively eliminate such content. The Call focuses not only on existing forms of online terror tools but also aims to adapt to future forms used by extremists. It proposes measures such as implementing a strong crisis response model and working towards a deeper understanding of radicalization pathways.

Privacy-enhancing tools play a crucial role in preventing radicalization. These tools enable researchers to access necessary data to understand the pathways towards radicalization. By studying successful off-ramps, these tools can contribute to preventing further instances of online radicalization.

One of the challenges in understanding the role of algorithms in radicalization is the issue of privacy and intellectual property. It is difficult to obtain insight into how algorithms may drive certain behaviors due to privacy concerns and proprietary rights. Despite these challenges, gaining a deeper understanding of how algorithms contribute to radicalization is essential.

Artificial intelligence (AI) presents both opportunities and risks in addressing online extremism. AI can assist in areas where there have been previous struggles, such as content moderation on social media. However, the public remains cautious because of the potential harms and risks associated with AI. Ardern argues that guardrails need to be established before AI can cause harm, and that the development of these guardrails should involve multiple stakeholders, including companies, governments, and civil society.

The involvement of civil society is crucial in discussions around AI in law enforcement to protect privacy and human rights. Ardern believes that civil society, alongside the government, can act as a pressure point in addressing questions regarding privacy and human rights in the context of AI deployment.

Education plays a vital role in addressing online extremism. Teaching critical thinking skills to children is essential to equip them with the ability to think critically and evaluate information. Adapting to rapid technological changes is also necessary, as the accessibility of information has significantly evolved from previous generations, leading to challenges such as disinformation and the need for digital literacy.

The inclusion of civil society and continuous improvement are important aspects of addressing challenges. The creation of a network that includes civil society may face practical obstacles, but ongoing efforts are being made to involve civil society in initiatives such as the Christchurch Call. Ardern acknowledges that learning and improvement are continuous processes, emphasizing the importance of making engagement meaningful and easy.

Overcoming the debate around privacy and safety on social media is a critical step in addressing extremist content online. Efforts to access previously private information through tools created by the Christchurch Call Initiative are underway, allowing researchers to study this information in real-time. The findings of the research will inform further action, involving social media companies in addressing the identified issues.

Disinformation is a significant challenge, and Ardern highlights factors that make individuals susceptible to it, such as distrust in institutions, disenfranchisement, lower socioeconomic status, and lesser education. Preventing individuals from falling for false information is crucial, and rebuilding trust in institutions is necessary to address the impact of disinformation.

Supporting regulators focusing on technological developments is crucial in managing the challenges presented by technological advancements. Ardern acknowledges the poly-crisis resulting from these developments and emphasizes the need to support regulatory efforts.

Ardern expresses optimism in the ability of humans to adapt and design solutions for crises. She has witnessed humans successfully designing solutions and rapidly adapting to protect humanity, giving hope for addressing the challenges posed by technological developments.

Information integrity issues, such as the lack of a shared reality around climate change, undermine efforts to tackle serious problems. Ardern emphasizes that these issues must be addressed before challenges like climate change can be dealt with effectively.

In conclusion, the detailed analysis highlights the importance of the Christchurch Call to Action in addressing extremist content online. The Call emphasizes the need for a multi-stakeholder approach involving governments, civil society, and tech platforms. Privacy-enhancing tools and understanding the role of algorithms are crucial in preventing radicalization. Guardrails need to be established for AI before it can cause harm, with civil society involvement to protect privacy and human rights. Education plays a vital role in teaching critical thinking skills and adapting to technological changes. The involvement of civil society, continuous improvement, and overcoming the debate around privacy and safety on social media are essential steps in addressing extremist content. The management of disinformation, support for regulators, and human adaptability in designing solutions for crises are also key considerations.

Maria Ressa

The analysis of the given information reveals several important points made by the speakers. Firstly, it highlights the significant online harassment faced by women journalists, which hampers their ability to participate in public discourse. It is reported that women journalists covering misogynistic leaders often face considerable online harassment and are frequently told to ‘buckle up’ by their editors. This indicates a systemic problem that needs to be addressed.

The role of technology in facilitating hate speech and the dissemination of harmful content is also underscored. The Christchurch terrorist attack, for instance, was live-streamed, demonstrating the misuse of technology for spreading violent and harmful content. This highlights the need to address the role of technology in inciting hate and enabling the circulation of such harmful material.

Efforts to address these challenges require more than just asking news organisations to remove harmful content. The analysis suggests that a multi-stakeholder effort is necessary. Following the Christchurch attack, Jacinda Ardern led a successful multi-stakeholder initiative, the Christchurch Call to Action, aimed at eliminating extremist content online. This approach emphasises the need for collaboration and coordination among various stakeholders to effectively combat online attacks and extremist content.

The analysis also highlights the importance of strong government action in addressing this issue. The New Zealand government, for instance, took robust measures to eliminate the influence of the Christchurch attacker by removing his name and the footage of the attack from the media. However, it is crucial that government action remains inclusive and does not suppress free speech.

Furthermore, the analysis points out that valuable lessons can be learned from the Christchurch approach in combating radicalisation. The approach was developed in response to a horrific domestic terror attack that was live-streamed on Facebook. It aims to understand how people become radicalised, with a focus on the role of curated content and algorithmic outcomes online.

The impact of social media behaviour-modification systems and the current focus on content moderation are sources of concern. Data from the Philippines has been analysed, and MIT research from 2018 found that lies spread six times faster on these platforms than factual information. The analysis argues that current solutions, which mainly focus on content moderation, are not effective in addressing the problem. Instead, a shift towards addressing structural issues, such as platform design, is recommended.

Furthermore, the potential harms of generative AI should be prevented rather than merely reacted to. Ressa draws an analogy with the pharmaceutical industry, where vaccines must be tested before release, arguing that social media has in effect been tested on the public, and emphasises the need for proactive measures to address the harm caused by AI.

The corruption of the information ecosystem is seen as a crucial problem, and the analysis suggests that civil society needs to come together more closely to address these challenges effectively.

The weaknesses of institutions in the Global South, as well as in countries experiencing a regression of democracy, contribute to the challenges. Authoritarian leaders are leveraging technology to retain and gain more power, which further exacerbates the issue.

Interestingly, the analysis highlights that even intelligent individuals can fall victim to misinformation and behaviour modification in information warfare or operations. This emphasises the need for education and awareness to combat these challenges effectively.

The integration of privacy and trust into tech design is seen as possible; however, it is often not delivered because regulation, law, and pressure from civil society are lacking.

Lastly, the analysis suggests that we are in a pivotal moment for internet governance. Maria Ressa, one of the speakers, expresses a more pessimistic viewpoint on the situation, while others remain optimistic. The importance of effective internet governance is underscored, as it directly impacts various areas, including peace, justice, and strong institutions.

In conclusion, the analysis highlights the challenges faced by women journalists in public discourse, the negative impact of technology in facilitating hate speech and harmful content, the need for multi-stakeholder approaches, the importance of strong government action, and the lessons from the Christchurch approach. It also emphasises the concerns regarding social media behaviour modification systems and the current focus on content moderation. Structural issues in platform design, prevention of harm from generative AI, civil society collaboration, corruption of the information ecosystem, weaknesses of institutions, susceptibility to misinformation, and the incorporation of privacy and trust into tech design are other noteworthy points raised. Overall, the analysis underscores the significance of effective internet governance in addressing these complex issues.

Session transcript

Karoline Edtstadler:
It's really a big honor for me to sit on the same panel with you, Jacinda, even if you're not here. You are really also a role model for women, and it's a pleasure that I have the impression I'm also getting a role model by hearing what you said about me. So I would say you can break it down with a joke, which is of course only a joke, and it goes as follows: the last 2,000 years, the world has been ruled by men; the next 2,000 years, the world will be ruled by women; it can only get better. But this is not the end of the story, because we are living in a very diverse world. We are living in a challenging world, and I think we need both the approach of women and of men. But the difference is, and Jacinda already mentioned being ambitious is something very important, that we women are judged and seen in a different way. If you are ambitious as a woman, you're the pushy one, you're the one who wants to get the position of a man, and so on and so forth. And I think what we as a society have to learn is that we need both ways of seeing the world. And we women can make a difference, because we are giving birth, we are mothers, we are really perceiving the life. And I think this is also why we are different from men. And that's good; there's nothing bad in it. And especially in times like these, you mentioned a few of the crises we are still going through, it's very important to have both ways of seeing the world, both assessments, female and male. And one last thing: I think women are still not as good as men in making networks, in holding together, in encouraging ourselves. And that's why I founded a conference last year in August in Salzburg, which is called The Next Generation is Female. And it's not about things against men; it's with the support of strong men. And it's really for female leaders in Europe to get together, to network, to exchange, to have personal exchanges also and encourage ourselves, because it's not easy, and we will go into details also regarding hatred in the internet and being judged as a woman.

Maria Ressa:
And that’s where we’ll go. For the men, I hope you find this as inclusive. Part of the reason I started this way is because the attacks against women online are literally off the scale. When I talk to reporters who are, in some instances, covering male leaders who are misogynist, their editors tell them, you know, buckle up. It’s not our problem. But I think one of the things that we want to lay out is that it is a problem of the technology, it is an incitement of the technology, and it is knocking women’s voices out of the public debate. Let me bring it back to what exactly we’re talking about, the technology that is shaping our world today. And one of the most interesting things Jacinda Ardern did was a very strong reaction to the live streaming of a terrorist attack. It was the first time that a government literally took, asked all news organizations around the world to take out the name of the attacker. So this was, I was surprised when we got this. But when we thought about it, I was like, oh, well, that kind of makes sense. But also to try to deal with taking down this footage from all around the world. Jacinda, you’ve pointed to the Christchurch Initiative as a multi-stakeholder solution for eliminating terrorist and extremist content online. What did it succeed in doing, and where can you see that moving forward, given the landscape we’re working in today?

Jacinda Ardern:
Thank you. A really big question, but I hope that there are some useful lessons to be learned: where we've succeeded, where we have more work to do. So I assume that a number of people in the room will have a bit of prior knowledge about the Christchurch Call to Action, which is now over 150 strong, with members and supporters made up of the likes of government, civil society, and technology platforms. But taking a step back, why did we create this grouping in the first place? Well, as you say, on the 15th of March in 2019, we experienced a horrific domestic terror attack against our Muslim community. It was live streamed on Facebook for a total of 17 minutes, and then subsequently uploaded a number of times over the following days. It was just prolific. People were encountering it without seeking it. And you're right to acknowledge that in some cases it was in people's feeds, because it was being reposted or referenced by news outlets. Now in the aftermath of that, of course, New Zealanders had a very strong reaction. This should never have been able to happen. But now that it's happened to us, what can we do to try and prevent it happening again? And we took an approach that was not just about how do we address the fact that live streaming itself became a platform for this horrific attack. Because if we just focused on that, that's a relatively narrow brief, and we know that the tools that are used for violent extremism, or by a violent extremist or terrorist online, are going to change. Live streaming was a tool at that time. The response was ill-coordinated by other tech platforms for a number of reasons. So work needed to be done there, yes, but we also wanted to make sure that we were ready and fit for purpose should other new forms of technology be the order of the day for those violent extremists. So the Christchurch Call to Action has a number of objectives. Some of them are things like creating a crisis response model, so that we are able to stand up quickly should anything like this occur again. And we have not seen anything at the scale and magnitude of Christchurch online since then, and that's in part because we now have this almost civil defense model. But we also said, how does someone become radicalized in the first place, acknowledging that in our case the terrorist involved acknowledged that they believed themselves to have been radicalized by YouTube. Now, you know, people will debate whether or not they believe that to be the case. But regardless, there were questions there to be asked around what we can do as governments within our own societies, but also to better understand these pathways. You know, how is curated content, how are algorithmic outcomes, driving particular behavior online? So we've got a large piece of work now looking at understanding that better. And these, I think, are areas where our learnings will be hugely beneficial much more broadly.

Maria Ressa:
That's fantastic. Let me follow up with that, which is, you know, last week, or I guess a week and a half or so ago, I taught a class with Hillary Clinton and the Dean of SIPA, Keren Yarhi-Milo, where we looked at the radicalism that comes with the virulent ideology of terrorism, right? How that radicalizes people. But one of the things we did in the class was to show how similar it is with what we are going through now on a larger scale with political extremism. Are there any lessons from the Christchurch approach and the pillars that you've created, how to deal with radicalization, for example, that we can learn to combat the polarization we're dealing with globally?

Jacinda Ardern:
Good question. And where I come at it from is: our starting point was, how did this individual become so radicalized that they were driven to fly to our country, embed themselves in a community, and then plan an attack against our Muslim community and take 51 lives? How is it that that can happen, and what can we do to prevent it? And now the learnings from that may be applicable across a range of different areas and a range of different sources of motivation and rationales, whatever they may happen to be as presented by the individual. One common denominator that we determined, despite the ideology that might be driving the behavior, was that we couldn't actually answer some of these questions, because so often there would be this issue around, well, privacy, intellectual property. It was very hard to get an insight into how, for instance, algorithms might be driving some of this behavior, if indeed it is. And so we took a step back and over time pulled together a group of individuals, as in governments and platforms, who were willing to put funding into creating a privacy-enhancing tool, which will enable researchers to look at the data that we need to look at in order to understand these pathways, and that will enable researchers across a range of fields to better understand that user journey and that curated content, help understand what successful off-ramps look like, and I hope further prevent this kind of radicalization online.

Maria Ressa:
No, that’s a perfect example. And Caroline, you were in the EU, the EU has been ahead, and data being one of the key factors for how we’re able to see the patterns and trends that influence behavior. Could you tell us about the EU’s approach to its democracy action plan, and then now rolling out the Digital Services Act and the Digital Markets Act?

Karoline Edtstadler:
Well, I think at times like this we should do everything in parallel, and there are so many crises and so many challenges we should find an answer for that it is really quite hard to do so. But I really think that the European Union is, regarding the AI Act, ahead, and if I'm saying ahead, I mean we are of course lagging behind, because we should have been quicker. But the developments were so quick in the last two years, I would say, that it is no more like that. So now we are really trying to do something regarding the AI, to have a framework for AI, to have a classification of the risks of AI, and I think that is something very important. To classify the risks, because there are some applications that do not harm us. We need them, I don't know, for some spam filters; that's not posing a risk, but on the other hand we have AI which is really harming the whole of our society. And this is the one thing. The other thing is that we already have the DSA and the DMA in the European Union, and I can proudly say that we in Austria were pushing that a lot, and we already started a process in 2020 to have a legal framework in Austria. And it was, I would say, now I put it diplomatically, I had a lot of discussions also with the European level, because they were not happy that we wanted to have an Austrian legal framework for that. But they knew that it would last for at least two years to create it in the European Union, and we were really quick in Austria: we had the Communications Platform Act set into place from the 1st of January 2021, and this is something where the social media platforms have to deal with that issue. They have to do reports, they had to set up a system where someone who faces hatred in the internet can push a button and say, this is against me, do something, delete it now, because it's going around the world very quickly, and you as a victim should be helped in the minute it comes across. So now we have the DSA and the DMA, and of course we have to review our legislation, but this was also my goal: to have first the national level, then the European, and now I'm here as a member of the leadership panel, and really try to create something for the universe. So this is for the whole international community, and this is something which is not easy, because of course different governments coming from different standpoints have different assessments of the situation, but in general it's about human beings having the need to treat this big thing of danger also for our whole society, as Jacinda also said, and as we saw in her country with this really horrifying attack, terrorist attack.

Maria Ressa:
No, that's from the data from the Philippines that we've looked at and analysed. In the Nobel Lecture in 2021, I called social media, the tech companies, behaviour modification systems, and I will tweet the data that shows that, as well as the impact we saw in our society. So let me ask our two leaders, you know, for social media, the first time that machine learning and artificial intelligence was really allowed to insidiously manipulate humanity at scale, you're talking about at that point maybe 3.2 billion, right, deployed at scale across platforms, because it doesn't just stay in one, there was a lot of public debate and a lot of lobbying money that was focused around downstream solutions, right? The way I think about it is, you know, there's a factory of lies, I mean, you would have seen this already, that is spewing lies into our information ecosystem, the river, and what we tend to focus on in the public is content moderation. Content moderation is like taking a glass of water from the river, cleaning it up, and then dumping it back into the river. So, you know, how can we move away from these downstream solutions like content moderation more into structural problems like design? The fact that MIT in 2018 said lies spread six times faster on these technology platforms than really boring facts. So that design allowed surveillance for profit, right, a business model that we didn't name until Shoshana Zuboff wrote a book called Surveillance Capitalism in 2019. That just meant that we were retrofitting, we were reacting to the problems after they materialized. Now that we're in the age of generative AI, I wonder how we can avoid being reactive. Why should the harm come first before we protect the people here? I know it's a tough question to throw at you, but let me give you an example, for example, of like the pharmaceutical industry. There was a COVID vaccine that we were all looking for, like imagine if the COVID, the pharmaceutical companies didn't have to first test it, that they could test it in public. So this group A, I'm going to give you vaccine A, and this group here, I'm going to give you vaccine B. Oh, group A, I'm so sorry, you died. I only say that because it is exactly what happened in Myanmar, for example, where both the UN and Meta sent teams to study genocide in Myanmar. So can we do anything to find, to prevent these types of harms happening? And Caroline first or Jacinda? Caroline.

Karoline Edtstadler:
Well, I would say the first thing is to raise the awareness, to take it as it is, to raise the awareness and to allow people education and give them skills to deal with that. The second thing, and this is what we are trying to do, we are doing that also in the leadership panel, is to set some legal framework in place. And I would say it should be a regulation that is not hindering innovation, because we know that the developments are quick, they are needed, and they can be used for the best of us. But we have to learn to handle them and also to handle the downsides. And now it's said very easily, put some legal framework in place, but it's not so easy, because I'm sure that we will lag behind also in the future. And I sometimes compare that with my former profession as a criminal judge. As a criminal judge you're sitting in the courtroom, but you never have all the information the perpetrator has. And you are always behind, but you in the end have to deal with it, and you can deal with that. And I think that's the same approach we have to use in regard to new technologies, AI, and all the things coming along. And we already proved that it is possible to do so with the DSA and the DMA, and before with the legal framework we put in place in Austria. Because, maybe two more sentences to that: when I started the process in 2020 and when I invited the social media platforms to get into a dialogue with me about hatred in the Internet and what we can do against it, and that we want to put up a legal framework from the parliamentarian side, because we as democracies are represented by the parliamentarians and we are ruled by governments, they said, oh no, you don't have to do that, because we are so good in handling the hatred in the Internet. We are deleting all the hate postings and so on. We don't need a legal framework from the national state or from, I don't know, the EU. And now we have it. And now I think almost all of them are quite okay with it, let's put it like that. And we are now in a process, also here in Tokyo, as we were in Addis Ababa, of getting into an exchange, exchanging our experiences and also the expectations of society, and this is a good development.

Maria Ressa:
Fantastic. Jacinda, your thoughts? Upstream solutions for generative AI.

Jacinda Ardern:
And look here, I think that sentiment that you shared in instigating this part of the conversation, around how do we put in place guardrails before the fact, has to be, I think, one of our key take-homes over the last, you know, ten years or more. And I think we're naturally seeing a hesitancy or a scepticism in the public as a result of the fact that we've been retrofitting solutions to try and prevent harms after the fact. Pew released some research, I believe it was recently, demonstrating that roughly half of people were quite negative about the relative benefits of AI, and those who know more are even more negative. Now, in part that will be because we are talking so much about the potential harms and there isn't that same emphasis on the opportunities that exist. But I also think it speaks to the experience in recent times of the public and the fact that this is, you know, it's relatively rare to have a field of work where just because you can, you do. As in, we have the ability to develop this tech and so we push ahead, even though there are those who are flagging risks and flagging harm. I'm an optimist though, and I think what I find really encouraging is that we are having these open conversations around the concerns that exist, and included in those conversations are those who are at the forefront of the tech itself. And this is where I come back to the fact that I, as a past regulator, am not in the best position to tell you precisely the prescription for those guardrails. But I can tell you, in my experience, the best methodology for developing them. And that, in my mind, will always be, in this fast-paced environment, not to solely take a regulatory approach, although it's an incredibly important part of the mix. Because of the rapid pace in which we see these technologies developed, and the multiple, I think, intersections and perspectives we need at the table, a multi-stakeholder approach that includes companies, government and civil society is incredibly important. And, you know, in my mind, even if I can't give you the prescription, I absolutely believe that will be the how. One other thing I did not anticipate when we set up the Christchurch Call to Action and when we convened a group of that nature, was the fact that the companies themselves created a natural tension amongst themselves. Those who were willing to do the least were pulled up by those who were willing to do the most. There was full exposure over, you know, those issues where they might have said previously in a one-on-one, that's not possible. You got tension there, where others were in the room; they knew that it wasn't possible just to speak to a regulator as though the regulator were unfamiliar with the tech or with the parameters they were operating within, because they're in a room with those who did understand. And I think that's particularly important in an area where this is so fast-paced, it is highly technical. We need that tension, I think, in the room as well. The final thing I'd say is there are opportunities here. AI may well help us in some areas where we have previously struggled with some of those existing issues that might have been spoken to around content moderation, social media, and so on. And naturally, so many of these things just collide in these conversations. And so we should keep looking for those opportunities. But I, for one, always want to take a risk-based approach. And I'll always look for the guardrails.

Maria Ressa:
Fantastic. So I'm going to ask one more question. And then if you have questions, please just go to the microphones. We're coming up on the last 20 minutes. So this last one: we've tackled the first contact with AI, we've looked at generative AI. And yes, the EU's doctrine on AI, there are lots of doctrines that have been pushed out already. But let's talk about the use of AI in law enforcement and surveillance, the concerns that have been raised about civil liberties, about privacy. What guardrails can we put in place to protect human rights? And I'm going to toss that first to Jacinda.

Jacinda Ardern:
Yeah, this is where we should not be starting from scratch. You know, liberal democracy should pull from the toolkit of human rights, privacy, you know, these are well established rules and norms. Now, if indeed there is any nuance in that discussion for any particular area, and often it should be relatively black and white, but if there is any nuance in the discussion, that is where civil society, in my mind, has to be at the table. And again, you know, not to harp on about the importance of the multi-stakeholder approach, but let's first and foremost not forget that we have well established human rights practices, privacy laws, and this should be our fallback position. Any question mark over that, then civil society alongside government should be really a good pressure point in those conversations.

Maria Ressa:
And this is where I would encourage civil society to come up stronger. We must, because of the use of Pegasus and Predator, and the increasing conflicts all around us. Caroline, the same question to you: what guardrails can we also put?

Karoline Edtstadler:
Well, I fully second what Jacinda said. I don't think that we have to invent the wheels newly. There is already a human rights based order in the world, even if we see, especially since February last year, that some are really disobeying everything we concluded to follow. But coming back to the internet and technology side, I think we have to guarantee a rules-based approach in this regard. And I also fully second that AI and all the other technologies can be used, and are already used, for the best of all of us. Think of medicine: they are used for operations, and they can do it much more precisely than a human person could ever do it. And this helps us, of course. And also in the law enforcement you asked about. I recently heard a presentation in Austria before lawyers and barristers, and it was also said that in the future, of course, law firms will use AI in finding the judgments, in structuring the knowledge quicker. But the question is: to which point will we go? Will in the end there be, not a judge, but some technology sitting and deciding if someone has to be sent to prison or not? So this is really where we should draw a line. And this is what we are trying within the European Union with the AI Act: to structure the risks of AI. And I really do think that this is the way we could guarantee that these technologies are used for the best of all of us. And of course we also have to be clear there is always a downside. But let's handle these downsides, and then it's better for all of us.

Maria Ressa:
Great. Annie, the mic is open for any questions from the audience. Yes, please. Do I have it? Okay. Say your name and then to whom you want to throw the question.

Audience:
I'm Larry Magid, and I'm the CEO of ConnectSafely. And I guess I'm here for some advice, because we are writing a parents' and educators' guide to generative AI. And we've got a journalist here, we've got a couple of politicians who are really good at talking to the general public. So how would you address parents, educators, people who don't have a technical knowledge of what GAI is, to reassure them that it's probably not the end of the world, at least initially, but also warn them that there are significant risks, and focus a little bit on what they can do within their own families and classrooms to mitigate the risk for the kids and themselves. Thank you.

Maria Ressa:
Caroline, you want to take it?

Karoline Edtstadler:
Well, I think it's true. The reality is sometimes that children are explaining to parents how to use the phone, or they are not doing so and they are simply using their phones and doing things the parents didn't want them to do with the phones. So I think it's also something we as governments have to try to put into some legislation or, let's say, information campaigns, to get the knowledge and the skills to the people. And this is of course a big, big challenge, because we have to also train elder people, because they use these things but there is again always a downside of it. And this is something we can only do together. We had some campaigns also in Austria and some trainings for elder people, and we had a lot of discussions also on how to train parents. And I don't have the answer how to do it, but I think this is the way forward: to exchange also our experiences in different countries, what works and how it can work.

Maria Ressa:
Great, thank you.

Jacinda Ardern:
This is such a good question. You know, I was in the generation that really sat at that really interesting transition point where, you know, we went from being students who were taught how to use the Dewey Decimal System to find a book in a library, and once you'd figured out how to find a book in a library you had found your fact and your resource, to then being in a period where we were of course inundated with the ability to seek information at our fingertips, but we weren't really taught, I think, as successfully that what we then found on that shelf might not necessarily be the fact that we thought we were finding before. I had a history teacher, who was extremely influential for me growing up, who described it as going from a hose to a fire hydrant for kids. So regardless of the particular tech at any given time, be it generative AI or whatever else we may encounter in the future, I would hope that we teach our kids to be curious. Not cynical but curious. And now the tools that we have may be giving the impression that we're going from a fire hydrant back down to a really well-refined hose, but that water has been derived from a particular source in a particular way, and we need to teach kids just to be curious about that. To go back not just to the information in front of you, but to think a couple of layers back, and think critically a couple of layers back. So I would sum it up with just curiosity in everything. I think that is going to help us with this, with the age of disinformation, with the rapid technological change, and I hope create a generation that is not cynical as a result.

Maria Ressa:
Fantastic.

Audience:
Hi, good morning. My name is Michael. I'm the Executive Director of the Forum on Information and Democracy. It's very intimidating to be in front of greatness, but I'll try to ask a good question. One of the themes I've heard today, and yesterday in fact, was the importance of a multi-stakeholder approach to finding solutions, and my question is specifically around the participation of civil society. It's very easy for governments to show up. It's very easy for companies to show up, particularly in an environment where pay-to-play is so pervasive, where you pay a few hundred thousand dollars, your CEO can show up and speak at an event, you can host a session in a panel, you can capture the narrative. It's not so easy for civil society. You can't just buy a business class ticket and get on a plane the next day and show up at an event. So if we're going to really advance a multi-stakeholder approach, what are some solutions to ensure civil society, especially those from the Global South, can participate effectively?

Maria Ressa:
I like the Global South. Let's, yeah.

Karoline Edtstadler:
Well, I can only say we try really to include civil society, and I think also the understanding is there that we can tackle these problems and issues only together. Not the government alone, not the parliamentarians alone, not the civil society, not the tech enterprises, but only we as a civil society together, and I really mean all of us, including the government. And we are doing that in Austria also. I give you an example: for the implementation of the SDGs, I will go back on Wednesday and we'll have the third dialogue forum on SDGs, where we really invite also the civil society to contribute, to tell us what they are doing, and this is the same here. You can't do it bottom down. You can only do it together.

Jacinda Ardern:
really good person to speak to this yourself, so maybe you should have a punt at the question. My very brief contribution would be that, Michael, I totally agree with you. Early on in the Call, you know, most of my interactions were, you know, with civil society at the table, because that was what we were building: we wanted it to be a structure where civil society were at the table. As you say, there are some real practical things to overcome in creating a network of that nature. There are, and they may well be in the room, I can't see the room, but if anyone from our Christchurch Call network is there, I'd ask them to give a quick raise of their hand, and just to share at some point, whenever it's appropriate, their experience. We certainly have learned as we've gone over the last four years around how we can make that engagement easier at a practical level and meaningful. But the fact we are still going, and I think it is still seen as a valuable network, I hope means we're doing some things right, but also learning as we go, because we're not perfect. But I'd hand back to you, dear moderator.

Maria Ressa:
Thanks, Jacinda. I mean, Michael, you know there are these times when civil society comes together. We have the Paris Peace Forum coming up. Over the last few years, that's been one way that we've been able to get civil society together, but frankly, not enough, I think. And there are many different groups; like Tallinn in Estonia has just handed over the Open Government Partnership to Kenya, right? There are all these different groups that are working together, partly some on past problems that could evolve to take on the, you know, I'm a journalist, so information is power, and that is, to me, the top problem. If we do not solve the corruption of the information ecosystem, we cannot solve anything, let alone climate change, right? Let me throw, let's take three questions, and then our leaders can answer. Please.

Audience:
Good morning, Svetlana Zenz, Article 19. I work in the program which actually engages civil society talking to the tech sector. My main countries are China, Vietnam, and Myanmar. Yeah. So the question is the following. I mean, all the European initiatives regarding controlling and, let's say, monitoring the private sector, especially the ICT sector, working in European territories are great. And of course, it's human rights centric. I mean, some of the CSOs in Europe might not agree with me, but in comparison with Myanmar, for instance, they're very good points to follow. So my question is: for all the private sector which is regulated in Europe, especially with the Digital Services Act or the AI Act, how would you monitor their actions in the countries with totalitarian regimes?

Maria Ressa:
Great. Go ahead, please.

Audience:
Hi, yeah, I'm Viet Vu from the Dais at Toronto Metropolitan University. Maria, we had you in March at our Democracy Exchange and on Democracy in Power, and so it's related to that. How do you square the fact that much of the people most susceptible to misinformation and disinformation are the kind of people who lack fundamental trust in structures and institutions? I'm sure there are strange conspiracies about what we're doing in this room today. How do you reach those people?

Maria Ressa:
Great, lack of trust. And we'll take one more.

Audience:
Yes, and I think, I hope it's not too big of a question, but we are being told as humanity that privacy and safety cannot coexist in the online world. We are being told that, because the technology is the way it is and because we are faced with the design choices that currently exist, privacy cannot be absolute if there is any consideration of safety, and safety cannot be guaranteed to anybody because we have to really care about privacy. My question to you is: how can we take a step back and think about human rights, and start from there and then think about design choices, instead of ending up, to be honest, in very stupid debates about little technology choices, little technology bits and pieces that we need to be working on, to overcome challenges and get to the place where we can have both? We really need your help as thought leaders, so any thoughts about that would be really welcome.

Maria Ressa:
Fantastic. Let me toss it, Jacinda, you first, Caroline, and then I’ll pick up some of the questions too.

Jacinda Ardern:
Yeah, I'll run through maybe the last two. I'll leave the first one to others. Just starting on that last one, around the safety debate and the privacy debate. I shared very briefly one experience we had with that, but it persisted for years, because, as I said, with the Christchurch Call, for instance, we didn't want to just look downstream, we wanted to go upstream. We wanted to look at those things that may be contributing to radicalization. Algorithmic outcomes kept coming up; privacy then kept coming up. Well, we've then demonstrated with the establishment of this tool that you can overcome that debate. It did take some resource to establish, but the Christchurch Call Initiative on Algorithmic Outcomes now has researchers accessing in real time what previously, we were told, was information we would not be able to get to for privacy reasons. Now, the next step for us will be demonstrating that that research can prove valuable, and then saying to the social media companies, well, this is what we're learning now, what are we going to do about it? So there, I think, will be the critical next step. But the learning for me there is: there are ways. It took too long, though. That was four years that it took us to really overcome that issue. But I hope that that gives some encouragement that we are pushing past it. And sometimes that creative tension I talked about, in the room with other tech companies, is really helpful for those debates. The second issue, you know, really hitting the nail on the head: what do we do about those who are susceptible to disinformation? You know, we've seen what it can do to liberal democracies when that is writ large. We've had some very recent examples in a number of countries, and it is devastating. Here I track back again. Now, there are those who are doing research on this, I believe, and particularly the likes of Columbia, which are tracking back to look at what are the common themes that we're seeing in those who are most susceptible. But instincts will probably tell us quite a lot as well. If you've got an inherent distrust of the state, probably at some point the state's failed you in some form. Now, that's a generalization. But if there's a general view that your economic position in life is influenced by the state, and you're in a lower socioeconomic category, and you're disenfranchised, or you've had an experience with the state, for instance, where at some point you've been in their care, these are some of the features that we see, and of course, educational attainment as well. Now, we need to track back then as governments and think about what we can do to re-establish that trust in institutions. And that means actually delivering for our people as they expect us to. It's as simple as that. When it comes down to the one-on-one, I've tried to have conversations with people who are deep in conspiracy, and it is an incredibly demoralizing experience. That's why I always go back to the beginning: how do we stop people falling in in the first place?

Maria Ressa:
Caroline?

Karoline Edtstadler:
Well, I would like to start with the second question, because I think that's the main question for us as politicians: how can we gain trust again in institutions, in governments, in democracy as such? I would say this is also the most difficult question to be answered. We are living in challenging times. This was mentioned already several times, and people are tired of crisis, and they want to believe easy solutions. And this is really our problem, but democracy is hard work every day, and we have to fight for the trust of the people on a daily basis. So this is the only thing we can do, and we all have to be aware of the fact that you normally cannot find a solution which is beloved by everyone. So there will always be a certain amount of people, a group or something like that, you can name it, who is not happy with the decision. But democracy means that we find majorities, and this is something which was clear in the past, and now it's not so clear. And one of the reasons is, and this is also going to the first question, that you can find misinformation and disinformation in the internet, and that you find your group only echoing your opinion. And this is really something we found out, especially during the COVID pandemic: that it is nearly impossible to get people out of such chambers if they are in these, yeah, in their opinions and surrounded by people who have the same opinion. So what we try to do is to regulate things in Europe, and we would like to be a role model also for the world. That's why I'm very happy that I'm part of the leadership panel and that I can contribute also from my experiences in Austria, but also at the European level. And again, we are not at the end of this story. And regarding this third point, privacy versus safety, I think we need both of them. And it's always a challenge, and it has always been a challenge, to guarantee human rights. You always have the situation that the human right of the one person ends where the human right of the other person is infringed. And this is something we have to do on a daily basis, and what I did as a criminal judge in the courtroom on a daily basis. If someone wants to demonstrate, he can, of course, do that. But this right ends when the body of, I don't know, another person or a policeman is injured. And also here, you have to find the balance, and this is what we have to do. So I would not be as pessimistic as the person, I think it was a woman, who put the question to me: we can do both. We have to do both.

Maria Ressa:
Jacinda has a hard stop at the top of the hour. So let me quickly answer, and then I wanna ask Jacinda for her last thoughts before we let you go, Jacinda. So the quick answer, the first question, the weakness of institutions in the global south, and the countries that you mentioned are the countries where we have seen the regression of democracy, right? And yes, in countries with authoritarian leaders, most of the time, they are using this technology to retain and to gain more power. How do we deal with that? We can talk about that more after the panel. The second one, the cognitive bias that you mentioned, it is there, but frankly, smart people think that they’re immune from the behavior modification aspects of information warfare or information operations. We are all susceptible, and sometimes, the smarter you are, the harder you fall, right? This is a problem. I think it’s a problem for leaders. It is a problem for our shared reality. This is the reason why I have spoken out a lot more about the dangers, because without a shared reality, we cannot do anything together. Finally, the last one, oh my God, I love your question, because privacy by design, trust and safety by design, when the tech companies say that they cannot, it just means they won’t, because there is no regulation, no law, no civil society pressure to demand it. We deserve better. Let me throw it back to Jacinda Ardern for her closing thoughts.

Jacinda Ardern:
Oh, look, I think that you’ve traversed a set of issues that are confronting, I think, all of us in different ways and cut across a range of other incredibly serious and important issues. How do you tackle climate change unless you have a shared reality around the problem definition? The degree to which we see information integrity issues playing out in geo-strategic issues, the fact that they’re coupled with what would be considered traditional forms of warfare. There is a poly-crisis, and at every level of that poly-crisis, we see this extra layer of the challenges presented by technological developments that we’ve seen in recent times. But I’m an optimist, and I’m an optimist because in the worst of times, I’ve been exposed to the ability of humans to design solutions and rapidly adapt and implement solutions, ultimately, for the most part, to protect humanity. And we have that within our capability. We need to empower those who are specifically focused on doing that, who are dedicating themselves to it, often at great sacrifice. We need to support regulators who are focused on doing that, and we need to continue to just rally one another in what is an incredibly difficult space. So my final note to those in the room who are working in these areas, I acknowledge you and the work you do. It is incredibly tough going, but you are in the right place at the right time, and your grandchildren will thank you for it.

Maria Ressa:
Thank you. Thank you, Jacinda Ardern. Caroline, your thoughts?

Karoline Edtstadler:
Well, I can only second what Jacinda said. Your grandchildren will thank you one day, because it's the time now to create the future, and these challenging and crucial times need all of us. And I'm coming back to what I already said: we cannot do it alone as governments, we cannot leave it to the tech enterprises, we cannot do it as politicians, no matter where you serve. We need all of us. We need to change society, to be aware of the challenges ahead, and stay optimistic. I really would like to conclude with: stay optimistic. I think, thinking back and learning from history, normally it took about 100 years to get used to a new technology. And we are talking about the internet, and we have got the father of the internet, who serves as our chair in the leadership panel, and he invented the internet about 50 years ago. So we are halfway. It's the right time to set the legislation for the internet. It's the right time to make the children, the parents, the grandparents aware of how and what to do with the internet and all these applications we already use in our daily life, and to see the positive things, how we changed our life to the positive since we have all these technologies included in our daily life. So this is really what I try to do. I'm really proud that I have the opportunity to contribute at that level, but that doesn't mean that it is more important than other levels. The contrary is the case. Everyone is needed in this process, and we can only do it together.

Maria Ressa:
Fantastic, and the last thing I would say is everyone in this room, you are here for the Internet Governance Forum. It is a pivotal moment, and they are so wonderfully optimistic. I’m probably a little more pessimistic, but it depends on what you do, right? It comes down to all of us, and I hate to say it that way, but it is this moment in time. Thank you so much, Right Honorable Jacinda Ardern, Minister Extatler. You guys in the room. We move to the main session. Thank you for coming, and let’s move.

Speech statistics

Speaker               Speech speed           Speech length   Speech time
Audience              192 words per minute   783 words       244 secs
Jacinda Ardern        182 words per minute   3161 words      1043 secs
Karoline Edtstadler   177 words per minute   2948 words      999 secs
Maria Ressa           166 words per minute   1750 words      634 secs