High Level Leaders Session 2 | IGF 2023

8 Oct 2023 02:15h - 03:45h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Deborah Steele

The analysis underscores the increasing prevalence and severe implications of misinformation and disinformation, fuelled largely by rapid developments in generative AI. This evolving technology can create synthetic content so sophisticated that it is nearly indistinguishable from authentic material, presenting an enormous challenge in responding to, and rooting out, misinformation and disinformation.

The situation is further complicated by the structure of digital platforms, where algorithms dictate the type of content delivered to each user. Many users remain unaware of the concept of echo chambers or of the algorithmic nature of their feeds. As a result, polarised information consumption is perpetuated, amplifying the dissemination of disinformation and leading to further division and misinformation.

To address these issues, the analysis suggests a holistic approach. This includes a substantial push towards enhancing media literacy. Moreover, it recommends strong political commitment to ensure the integrity of information-sharing systems, a task unquestionably challenging yet pressing in light of dwindling public trust in institutions.

Regulation also forms a crucial part of the solution, and there is a clear call for more comprehensive regulatory measures. Alongside this, technological interventions, such as advanced authentication tools, play a pivotal role. These techniques can help distinguish synthetic content from genuine material, thereby mitigating some of the risks associated with generative AI.

This in-depth analysis connects with several Sustainable Development Goals (SDGs) — particularly, those concerned with industry, innovation, and infrastructure; reduced inequality; and peace, justice, and strong institutions. The study’s findings stress the urgency of proactive action to counter misinformation and disinformation and contribute positively to these universal goals. Overall, this accentuates the critical intersection of technology and societal challenges, underscoring the importance of informed governance and policymaking in this sphere.

Vera Jourova

The European Union (EU) is taking strides to regulate generative artificial intelligence (AI) effectively and to counteract the proliferation of disinformation. These efforts are primarily aimed at high-risk AI applications, particularly deep fakes: digitally manipulated video or audio that portrays individuals saying or doing things they never did. Without clear labelling or watermarking, such deceptive content can significantly influence public perception and inflict harm, notably on electoral processes.

Online harassment and hate speech, particularly those aimed at women, racial and ethnic minorities, and the LGBTQ+ community, are on the rise and require immediate attention. The challenge lies in identifying and eliminating such malicious content without infringing upon the principle of freedom of speech, which the EU firmly upholds.

The EU therefore turns to legislation such as the Digital Services Act to block the monetisation of illegal online content and to forestall the dissemination of disinformation on the internet. As a legally binding act, it offers a robust structure for enforcement, complete with penalties for contraventions. Alongside this, the EU is advocating a directive against violence towards women, with particular emphasis on digital violence.

Along with regulation, the EU acknowledges the role of technology companies and social media platforms in controlling disinformation. The EU urges these corporations to assume responsibility by boosting fact-checking capabilities and complying with the Code of Practice against disinformation, a set of voluntary commitments designed to combat disinformation online effectively.

Support for independent media is also a crucial aspect of the EU’s wider strategy. The EU prioritises the capability of media outlets to deliver factual information, thereby aiding citizens in making informed and free choices. EU policy supports these outlets, strengthening their capacity for accurate and autonomous reporting.

Within the fight against disinformation, media literacy is recognised as a long-term challenge. The EU heavily funds media literacy, working in close collaboration with member states on numerous projects designed to enhance the media literacy skills of the populace.

Furthermore, the EU emphasises the necessity for global democracies to cooperate in crafting international rules. It suggests that collaboration at G7 level could be an effective way to address issues such as an AI code of conduct. This global cooperation is perceived as a pivotal step in creating equitable standards across jurisdictions.

The EU’s collaborative approach to regulation, involving civil society, local media and academia, underlines its commitment to balancing technological advancement with public welfare. The EU emphasises the need for broader protection of citizens’ minds and souls, informed by its previous experience in protecting consumers’ rights and welfare.

Lastly, the EU anticipates stricter regulation for political advertising online. Transparency is the central theme here. Citizens should comprehend the content they consume and not fall for manipulative information. Transparency in online political advertising isn’t simply in line with the broader endeavour to rein in disinformation; it is also considered crucial for promoting a healthier, more democratic society.

Randi Michel

Artificial Intelligence (AI) technologies have emerged as a proverbial double-edged sword, celebrated for bolstering innovation yet heavily criticised for their exploitation in disseminating disinformation, which threatens human rights, democracy, national security, and public safety.

In response to this grave concern, the US government emphasises the importance of transparency and public awareness of synthetic content. Efforts have resulted in securing commitments from 15 leading AI enterprises to advance responsible innovation, alongside the crucial development of guidelines, tools, and practices to authenticate digital content. These initiatives represent positive strides in mitigating false narratives.
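To make the idea of content authentication concrete, the sketch below shows one way a publisher could sign content and a consumer could verify its provenance using a digital signature. This is a minimal, hypothetical illustration built on the widely used Python `cryptography` library; it is not the specific mechanism referred to by any speaker, and real provenance standards (such as C2PA-style manifests) carry far richer metadata.

```python
# Minimal provenance sketch: a publisher signs content with an Ed25519 key,
# and anyone holding the public key can check that the content is unaltered.
# Illustrative only; the key names and sample statement are invented.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair once, then sign a digest of each item.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_content(content: bytes) -> bytes:
    """Return a signature over the SHA-256 digest of the content."""
    return private_key.sign(hashlib.sha256(content).digest())

# Consumer side: verify that the content really comes from the key holder.
def is_authentic(content: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

statement = b"Official statement: polling stations open at 08:00."
signature = sign_content(statement)
print(is_authentic(statement, signature))                  # True: genuine
print(is_authentic(b"Polling is cancelled.", signature))   # False: tampered
```

Signatures of this kind mark authentic content; labelling or watermarking AI-generated media addresses the complementary problem of marking synthetic content.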

The engagement of civil society, academia, and the private sector is deemed imperative in addressing issues associated with AI-generated media. Strengthening local and global cooperation is key to safeguarding people from the adverse effects of fabricated or manipulated media. Meaningful dialogues are ongoing with top AI experts, consumer protection advocates, and civil society organisations, reinforcing a collaborative approach.

On the technological front, tools are instrumental in the fight against falsehoods, and AI itself plays a major role in identifying and labelling artificial content. However, voluntary commitments from AI companies are viewed as insufficient to avert the associated risks, prompting the administration to prepare an executive order and advocate bipartisan legislation to guarantee responsible use of AI.

In the global context, a shared norm is deemed necessary to tackle the issue, with calls for technology companies to show greater transparency. Furthermore, multi-stakeholder engagement is identified as crucial, echoing calls for a collective effort. To build resilience, the US State Department announced the Promoting Information Integrity and Resilience initiative, offering technical assistance to organisations and capacity-building to local governments and media outlets.

Summing up, while efforts are underway to enhance transparency and regulate AI misuse, there is an explicit call to ensure these measures do not curb internet freedom or lead to censorship. The commitment to uphold human rights and democratic freedoms remains paramount. All in all, the analysis portrays a multifaceted issue necessitating the engagement of multiple sectors and nations, responsible innovation, and the establishment of international norms, all whilst respecting individual freedoms and rights.

Nic Suzor

The analysis provides insightful revelations centred on complex topics such as content moderation, misinformation, AI innovation, synthetic media, and proposed solutions among others.

The tasks of content moderation and media authentication face grave challenges in an era of rapidly advancing technology, especially with developments in AI and the emergence of synthetic media. The intricacies of this landscape are further compounded by the difficulty of distinguishing between misinformation and disinformation, necessitating a careful examination of who contributes to the distribution of harmful false material. Both malicious actors and misinformed ordinary users play a part, and ordinary users may foster the circulation of detrimental content without realising it.

Notwithstanding these challenges, advancements in AI and generative AI, initially perceived negatively due to their contribution to synthetic media’s creation and dissemination, also hold many potentially beneficial applications. These technological improvements remain inherently neutral, suggesting their suitability for practical application in our progressively digitised age. However, it becomes increasingly crucial to mount a robust response against the potential repercussions of these innovations, particularly due to the difficulty in identifying and labelling AI-generated synthetic media.

Significant attention is garnered by the role of tech companies in regulating misinformation. The adaptability of these firms in combating misinformation, as evidenced during the Covid-19 pandemic, underpins their potential for positive impact. Nevertheless, these roles invite controversy, primarily owing to the firms’ reliance on technical responses rather than human-centred strategies.

Regulation of false information presents a complex challenge, particularly around distinguishing between parody, satire, and acceptable speech. These complexities underscore the inherent challenges within the legal and regulatory framework in addressing this issue. As such, the Oversight Board, an emerging platform for discussing content-related issues, is viewed as a promising solution, notably given its active involvement in cases pertaining to digitally manipulated media.

Amidst these technology-driven changes, there is a need for focused attention towards marginalised communities. Technology risks perpetuating existing inequalities, and there is a call for tech firms to proactively implement measures. The development of comprehensive system safeguards and protections is strongly advocated, noting that vulnerable individuals often bear the brunt of misinformation and abuse.

The importance of multistakeholderism in governance is accentuated, emphasising that no single solution suffices for the pervasive problem of harmful content. Governments play a sizeable role, accounting for a considerable proportion of total content removal requests, yet they have their own limitations, including the use of censorship to maintain power. Civil society’s active role is therefore critical in resisting state-imposed censorship.

In summary, these findings offer a comprehensive perspective of the challenges and potential solutions confronting content moderation, misinformation, and the role of AI in our progressively digital world.

Maria Ressa

The comprehensive review reveals serious concerns about the negative impacts of AI, which is said to weaponise human emotions. In the first human encounter with AI, through social media, the emotions weaponised were fear, anger, and hatred. Generative AI, as the second encounter, is linked to an epidemic of loneliness and isolation, with evidence drawn from instances of suicide in which the detrimental influence of AI has been explicitly cited.

While an appreciation for technological innovation is evident, the prevailing sentiment underscores the necessity of applying technology responsibly. The analysis further outlines the imperative need for robust regulation and enhanced public protection against the misuse of AI. Tech start-ups are criticised for marketing AI as a preferable companion, a misleading and dangerous framing that intensifies its misuse.

The detrimental impacts of technology extend prominently into the realm of disinformation. A disturbing finding from MIT posits that lies spread six times faster than the truth, seemingly by design. The problem is particularly exacerbated on mainstream social platforms such as Twitter (now X) and Facebook (now Meta), where safety measures to curb the unchecked dissemination of misinformation appear to have been rolled back.

The predicament becomes more severe in the context of the Global South, where populations are more susceptible to misinformation due to weaker institutional structures. Voluntary attempts to curb disinformation implemented between 2016 and 2018 failed, intensifying the call for more robust regulation.

However, the introduction of new regulations brings its own set of challenges, foremost among them their slow emergence amidst a rapidly evolving technological landscape. Tech platforms are nudged towards accepting responsibility for the harm they cause and moving towards enhanced transparency and accountability. The newly introduced Digital Services Act (DSA), which provides real-time access to data revealing potential harms, is recognised as a progressive step towards adjusting to a world that now revolves around data.

The analysis furthermore exposes a collective call for citizens to redefine their engagement in the digital era, moving from being mere users to active role players. This sentiment resonates strongly with Maria Ressa’s personal experience of being targeted by hate messages, threats, and legal battles.

Tech companies are urged throughout the discourse to moderate their greed, reconsider their business models, and take definitive actions to counteract digital harm. The analysis culminates on a hopeful note, emphasising the role of governments, now aware of the challenges posed by the tech industry and the urgency of accelerating their response accordingly.

In essence, the extensive summary reflects a critical evaluation of technological progress, places vital emphasis on the necessity of ethical standards, and highlights the essential roles of governments, tech companies, and active citizen participation in managing the myriad challenges posed by the evolving digital landscape.

Tatsuhiko Yamamoto

Generative AI technologies have emerged as key contributors to the propagation of misinformation and disinformation in our society. These systems, by virtue of their capabilities and the considerable amount of false content they can produce instantaneously, hold the potential to interfere with people’s ability to make independent decisions and disseminate biased or incorrect information on a grand scale. The examined findings also underscore the potential for societal ‘hallucinations’ or mass deceptions triggered by the misuse of generative AI. However, these issues are complex and call for multidimensional and intricate solutions, particularly in the face of the potential ‘tsunami of disinformation’ that generative AI can engender.

The attention economy model further intensifies the challenges posed by misinformation and disinformation. In this environment, a plethora of information constantly vies for limited user attention, and misinformation and disinformation often win out because of their heightened engagement appeal. This dynamic accelerates the propagation of false or misleading content, providing a fertile landscape for its growth and influence.
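A toy example makes this dynamic concrete. The sketch below ranks posts purely by a predicted-engagement score, the way an attention-economy feed might; the posts and scores are invented for illustration, and the point is simply that a false but emotive item outranks an accurate one when engagement is the only objective.

```python
# Hypothetical illustration of engagement-only ranking in an attention-economy
# feed. All posts and scores below are invented for this example.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    accurate: bool               # whether the claim is factually accurate
    predicted_engagement: float  # platform's predicted clicks/shares/comments

posts = [
    Post("Calm, sourced explainer on the new policy", True, 0.12),
    Post("Outrage-bait rumour about the same policy", False, 0.71),
    Post("Fact-check correcting the rumour", True, 0.08),
]

# Engagement-only objective: the false, emotive post rises to the top of the feed.
feed = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
for post in feed:
    print(f"{post.predicted_engagement:.2f}  accurate={post.accurate}  {post.text}")
```

A ranking objective that also weighted accuracy, provenance, or fact-check signals would reorder this feed, which is the kind of structural change to the ecosystem that the analysis calls for.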

Tech companies, seen as pioneers in this digital age, have a significant role to play in mitigating this disinformation crisis. Indeed, there is a pressing call for these corporations to bolster their commitment to combat false information, featuring reports from fact-checking organisations more prominently on their platforms. Strengthening corporate responsibility, coupled with enhanced collaboration in fact-checking efforts, marks the way forward. Yahoo Japan’s alliance with Japan Fact-Check Center is one such successful precedent in combating the spread of misleading content.

Importantly, there is broad agreement that misinformation is a structural issue that must be addressed at its roots. This necessitates improved data literacy and the development of technological standards. The expanding and increasingly dominant power of tech companies has been identified as a critical area requiring immediate action. ‘Digital constitutionalism’ emerges as a novel regulatory concept offering a promising way to constrain this amplified influence, involving the crafting of collaborative global legislation and international frameworks capable of effectively confronting and regulating platform companies.

In addition to technology-centred solutions, the importance of ‘information health’ is stressed, advocating a balanced and unbiased intake of information, likened to maintaining nutritional balance in food consumption. Literacy thus gains significance as an essential tool in combating the misinformation epidemic: improved awareness and understanding of what information people consume could drive a consequential transformation within tech companies known to disseminate harmful or misleading information for profit.

Nisa Patria

The digital landscape is exhibiting a worrying trend with the surge in internet usage and AI technologies playing a significant role in heightening the spread of misinformation and disinformation. An alarming 62% of internet users in Indonesia have encountered false or questionable online content, casting doubt on the trustworthiness of news circulated via social media platforms.

In response to these concerns, Indonesia has devised an all-encompassing strategy. Central to this plan is a national digital literacy movement aimed at educating users and fostering greater discernment towards online information. At the intermediate level, procedures have been introduced to debunk hoaxes, ensuring false assertions are challenged and the truth prevails. At the downstream level, law enforcement activities have been intensified, holding those who propagate misinformation accountable for their actions. This comprehensive strategy presents a benchmark for other nations grappling with misinformation and disinformation.

There is broad agreement on the need to improve digital literacy worldwide. By equipping individuals with the skills to differentiate effectively between true and false online content, societies can proactively tackle disinformation, an approach known as “pre-bunking”. Furthermore, the current circumstances call for a revised governance structure that rewards the sharing of accurate information, aiming to dismantle the appeal of spreading false news.

Moreover, the call for collaborative efforts to consistently counter misinformation and disinformation is unequivocal. Especially in the face of emerging technologies such as generative AI, global efforts to embrace such advancements offer a significant defence in this digital battle. To summarise, it’s clear that countering misinformation and disinformation calls for a multifaceted approach that includes education, rigorous enforcement, governance alterations, and international cooperation.

Paul Ash

The analysis underscores a significant and urgent need to address disinformation, which has been exponentially fuelled by advancements in artificial intelligence (AI). The burgeoning prevalence of such disinformation is seen as a potential threat to democratic infrastructure and institutions, and it can also undermine essential human rights. These concerns directly pertain to SDG (Sustainable Development Goal) 16, which promotes Peace, Justice and Strong Institutions.

In this context, the analysis conveys a strong sentiment favouring the development of all-embracing solutions. These solutions must integrate governments, various industry sectors, and, notably, civil society. There is a key emphasis on engaging communities impacted by disinformation from the initial stages of formulating responses. This inclusive approach aligns with SDG 16’s broader aspiration for peace and justice.

The consensus of the analysis is that international human rights law should be the foundation of any measures taken. This reflects the sentiment that human rights principles underpin everything and that compromises on information integrity could potentially undermine them.

Moreover, the analysis advocates a values-led institutional response involving multiple stakeholders. This ethos aligns with SDG 17, which calls for global Partnerships for the Goals. Whilst recognising that multilateral action often lags behind the rapid pace of technological advancement, the sentiment captured supports gradual progress from voluntary commitments towards a formal, ground-up regulatory framework.

The experience gleaned from the Christchurch Call, notably highlighted in the analysis, stresses the need for an authentic multi-stakeholder response. This point underlines the essential role of comprehensive cooperation and collaboration in effectively addressing the global disinformation challenge.

In summary, the analysis portrays the pressing necessity to confront disinformation proactively, harnessing the principles of SDG 16 and SDG 17 in formulating solutions. It advocates for a cooperative approach encompassing all society sectors and honouring international human rights laws. Despite inherent challenges, it encourages forward momentum from voluntary action towards regulatory norms, propelling us closer to a digital world characterised by peace, justice, and robust institutions.

Session transcript

Deborah Steele:
you would take your seats please. Hello and welcome to High-Level Session 2, Evolving Trends in Misinformation and Disinformation. I’m Deborah Steele, the Director of News at the Asia-Pacific Broadcasting Union based in Kuala Lumpur and a former journalist and news editor at the Australian Broadcasting Corporation. We’ll continue on. Misinformation is the unintentional spread of inaccurate information shared by people who are unaware they are passing on falsehoods. Disinformation is deliberately falsified content that aims to deceive. It is a deliberate act. This year, the enormity of the challenge in responding to both has skyrocketed. Generative AI has transformed what is possible and the scale of risk. For decades, AI has automated tasks such as detecting and completing patterns, classifying data, honing analytics, and detecting fraud. But generative AI is a new paradigm. It can re-version and summarize and create new content. Some of it serves a very valid and beneficial purpose, but some is synthetic. Generative AI has developed a new paradigm of synthetic content. Synthetic content refers to the artificial production, manipulation, and modification of data and media for the purpose of misleading people or changing an original meaning. Examples we have seen already include an image of a man on the grounds of the Pentagon, a video of the Ukrainian president telling Ukrainian troops to surrender, a video of Donald Trump in an orange prisoner jumpsuit, celebrity porn videos, and pictures of Pope Francis wearing a Balenciaga puffer jacket. All of these were, of course, fake, but they looked real. As a result, there’s growing concern about the potential use of generative AI to spread false narratives, to spread lies and disinformation. There are new developments in generative AI almost every day, and predicted timelines have been smashed. In some cases, innovations that were predicted to take five years took less than six months. And so we can expect a massive surge in online content of this type, or what has been described as a tidal wave of sludge. This is happening at a time when information consumption continues to be highly polarized. This is exacerbated by the way in which online algorithms determine what you see, content that aligns with your pre-existing beliefs, limiting exposure to other perspectives, thereby reducing critical thinking and reinforcing echo chambers. Most people don’t realize their feeds are determined by algorithms, and they don’t even realize they’re in an echo chamber. The echo chamber effect not only amplifies the spread of misinformation, it also makes it difficult to engage in constructive conversations based on accurate information. All of this at a time when, as we heard in the first session, public trust in institutions is sliding in many countries. And this is why we’re here today, to get the balance right between the opportunities presented by new technologies and platforms, and the need to limit risks. Addressing misinformation and disinformation requires a multi-pronged approach, from improving media literacy to making a political commitment to protect the integrity of information sharing systems. It requires regulatory measures and technological interventions, including authentication tools. So now, to our panel. And first of all, let me say we have an apology from Salima Bahr, the Minister of Communication, Technology and Innovation in Sierra Leone. Unfortunately, she is unable to join us today, so she has sent her apologies. 
To the rest of our panel, Mr. Tatsuhiko Yamamoto, Professor at Keio University Law School. Ms. Vera Jourova, European Commission Vice President for Values and Transparency, whose work includes ensuring democratic systems are protected from external interference. Ms. Maria Ressa, journalist, editor and founder of Rappler.com, and the 2021 Nobel Peace Prize Laureate. Ms. Randi Michel, Director of Technology and Democracy at the White House National Security Council. And my fellow Australian, Mr. Nic Suzor, member of the Meta Oversight Board, the body that people can appeal to if they disagree with decisions Meta makes about content on Facebook or Instagram. So let’s start with our first question, Mr. Tatsuhiko Yamamoto, Professor at the Keio University Law School. Advancements in generative and intergenerational AI are producing new information with a higher degree of complexity. What’s the impact on

Tatsuhiko Yamamoto:
disinformation and misinformation? Thank you very much. I just want to speak in Japanese because this is my language. Well, generative AI is a very tasty, poisonous apple. That’s what I call it. Why do I say so? Because generative AI content has some biases or mistakes or wrong information. In other words, poisons are embedded. However, on the other hand, it’s smooth and it’s very tasty. It tastes like something that a human prepared for you, so you can keep on eating one apple after another. So I would say it is very close to a very tasty, poisonous apple. Now this poison, however, is eventually going to affect our cognitive processes so that we won’t be able to make independent decisions for ourselves. And also, if people are poisoned, and those who are poisoned say something, or if generative AI output which contains poison is used as training data, then the poison would actually infect all of society. So that’s the concern that I have. And also, generative AI can produce misinformation or disinformation in very large quantities instantaneously. So an information tsunami can be created, or a tsunami of disinformation can be created, by generative AI. So our challenges, therefore, are becoming very complicated. So this is a hallucination of the entire society; a collective hallucination can happen. That’s what I would call it.

Deborah Steele:
Maria, to you.

Maria Ressa:
Thank you. I completely agree, but I would push it one step further, which is not only do we lose free will, which is what our colleague said, it essentially hacks our biology. It weaponizes our emotion. Let me first start with the first time that’s happened, which is in social media. And what was weaponized at that point in time with the first contact, our human contact with AI, was our fear, our anger, our hatred. Tribalism is the code word that we use, but it’s essentially us against them or radicalization. I came from studying how the virulent ideology of terrorism of Al-Qaeda seeped through our society. How did they radicalize people? How did a person, an Indonesian, become a suicide bomber because of that? Well, in many ways, fear, anger, hate, tribalism, all of these things that separates the person from their family and their community, this is actually what was weaponized. So that’s the first one. Generative AI, which was released in November 2022, goes a step further, and this goes hand in hand. The U.S. Surgeon General came out with a report in May where finally the harm to children was brought up publicly after so many studies had shown it. But here’s the part that I thought was fascinating, the epidemic of loneliness. So how will it hack our biology this time around? Remember the first time, fear, anger, and hate? The second time, generative AI is going to be loneliness, that seed of loneliness that is in each of us, and you have seen this now. So from November until today, we’ve seen people commit suicide. We’ve seen crimes committed. There are lawsuits that are out, and there’s still impunity in terms of protection for the public. It’s very easy to think this person, this AI, generative AI, is real, and at 2 in the morning, when you’re being, when you’re looking at it, you turn to it, some of the startups actually say, here are these people, this is your friend, your friend that will be a better friend to you than anyone else. This is dangerous, and not only is it dangerous individually, I think it’s dangerous for our society. Having said that, I’m not a tech luddite. I love technology, and we were one of the first adapters in our country, but I think we need to see right now this is a moment that is critical for the world, and we must take the right steps forward.

Deborah Steele:
Thank you, Maria. Ms. Vera Jourova, what are your thoughts on the impact of generative AI on disinformation and misinformation?

Vera Jourova:
Thank you very much. Can you hear me? Yes. Well, I represent here the EU, which is the European Union legislator, so maybe I will not enrich the fantastic analysis which we heard from Professor and from Maria, and rather to share with you how the EU is doing in regulating the space. First of all, what we see is that the generative AI plays an increasing role in the context of disinformation. How we tackle the issue of disinformation, you said that it’s an intentional spread. We speak about intentional production of disinformation, and this is our definition of disinformation, which is dangerous and where we need to regulate, and it’s when the disinformation is being produced in a coordinated manner with the intention to do harm to the society, and the harm we define as the harm to security of the society and to electoral processes, so to elections. We can imagine that the combination of disinformation as this intentional production to do harm, using the AI, and especially generative AI, it’s a dangerous cocktail to drink. That’s why we are now addressing this issue. reacting on it by including the new chapter into the AI Act, which we are now finalizing in the legislative process in the EU, where we want to introduce several principles which have to be maintained. First of all, that the AI must not start to govern the people. The people have to always govern the AI, which means that the human beings are at the beginning of the development of the technologies over the life of technologies. So having the chance to look into the algorithms and to guarantee that dangerous technologies or uses of technologies will be stopped, and as well as at the end. So we have three categories of AI, which is low risk, medium risk, and high risk. And especially for the high risks, we want this increased control. Also, we have several case uses which are unacceptable in the EU legislation. Coming back to generative AI, we believe that at this stage, it’s very important to say that the rights which the human beings have developed for themselves, for ourselves, like the freedom of speech, the copyright, and I could continue, must remain for the real people. That we must not give these rights to the robots. I’m simplifying horribly. But this is very, very important in our philosophy that what belongs to human beings have to belong to human beings. Coming back, and I will stop here, to our AI Act, where we are adding now the chapter on generative AI. We have a very strong plan or vision that the users have to be informed that what they see and what they read is the production of AI. So we insist on labeling of such texts and images, and also some watermarking of the deep fakes so that the people immediately see that this is it. And also the deep fakes used in the, for instance, electoral campaigns, in case they have the potential to radicalize the society or heavily manipulate the voters’ preferences, in my view, they should be removed or stopped. But this is still in the making. We are now discussing with the co-legislators.

Deborah Steele:
Thank you. Thank you. Ms. Randi Michel, Director of Technology and Democracy at the White House National Security Council, what are your thoughts on this?

Randi Michel:
First, I want to thank the organizers of IGF and of this timely panel for having me here today. It’s an honor to share the stage with such distinguished panelists. I want to thank all of you for being here to discuss such an important issue. Generative AI technologies lower the cost and increase the speed and scale with which bad actors can generate and spread mis- and disinformation, including deep fakes and other forms of synthetic material. And as AI-generated content becomes more realistic, this content can threaten human rights, democratic institutions, national security, trust, and public safety, especially to women and girls, LGBTQ plus communities, ethnic and racial minorities, and other marginalized groups. For example, inauthentic content or even claims of inauthentic content in the context of elections may erode trust in democratic institutions and put the credibility of electoral processes at risk. In fact, just recently in Slovakia, a deep fake appeared to portray audio of a political candidate discussing how to rig the election. While the media was able to fact check the recording, the audio was released during a 48-hour media moratorium ahead of the election, presenting unique challenges to address the synthetic content. In response to these evolving risks around the world, the US government is working to increase transparency and awareness of synthetic content. The Biden-Harris administration has secured voluntary commitments from 15 leading AI companies to advance responsible innovation and protect people’s rights and safety. This includes a commitment from these companies to develop mechanisms that enable users to understand if audio or visual content is AI-generated, including issuing and authenticating content and tracking its provenance, labeling AI-generated content, or both, for AI-generated media. Building off of that, we are working to develop guidelines, tools, and practices to authenticate digital content and detect synthetic content. We’re working to build the capacity to rapidly identify AI-developed or manipulated content, while at the same time sufficiently labeling our own government-produced content so that the public, including the media, can identify whether content that claims to be coming from the US government is in fact authentic. These measures include, for example, digital signatures, watermarking, and other labeling techniques. Together, these efforts will help us implement safeguards around AI-generated media that could mislead or harm people. But the US government alone cannot protect people from the risks and harms posed by synthetic or manipulated media. We need to work together with our partners and allies to ensure that these safeguards are implemented around the globe. And it was great to hear my colleagues from the European Commission speak about this as well. We hope that other nations will join us in these efforts to establish robust content authentication and provenance practices. By working together, we’re hoping to establish a global norm that enables the public to effectively identify and trace authentic government-produced content and identify artificial intelligence-generated or manipulated content. And most importantly, we are prioritizing engagement with civil society, academia, standard-setting bodies, and the private sector on this issue. Forums like today’s are an important opportunity to bring together a wide range of stakeholders.
President Biden and Vice President Harris have repeatedly convened top AI experts, researchers, consumer protection advocates, labor and civil society organizations on this topic. And we look forward to many more conversations to come. Thank you.

Deborah Steele:
Thank you very much. Nic Suzor, what’s your perspective as a member of the Meta Oversight Board?

Nic Suzor:
Thank you. And thanks to the panelists so far for articulating a really quite concerning set of issues that are seriously challenging our existing responses to content moderation, but also the authentication, the flow, the trust of media in our ecosystem. I wanna start by apologizing. I know we’re only two or three hours into day zero, but I’m going to take us, I wanna make two points that I think will really, although I hope to maybe provide a little bit more technical detail, but also more pointed set of challenges for us. It’s easy, I think, to talk about AI and generative AI at a high level. What gets really complicated is as soon as we start to unpack the types of responses that we’ve spoken about, the types of content that we’ve spoken about, the types of relationships that we’ve spoken about. So the first point is mis- and disinformation. The introduction to the panel provides a way of thinking about these two things as separate concepts that disinformation is intentional and misinformation is not. Now, that distinction, it makes sense when you’re thinking about rules and punitive regimes and attributing fault to people for spreading intentionally disinformation. But when you look at how disinformation spreads, it spreads through the actions of both malicious actors, but also mainstream media, social media platforms that are optimized for engagement, ordinary users who are just participating in the debates of the day, and they all play a huge role in enabling harmful material, harmful false material to circulate. So it’s really hard, first off, to make that distinction. The second is what even is synthetic media? And this is really hard because we can talk about labeling. We can talk about the importance that people are made aware of what is generated by AI and how AI systems prioritize and shape the way that information is presented to them. But when I write an email, I rely really heavily on autocomplete. When I take a photo, I use a lot of post-processing. If I’m removing someone’s face who I no longer wanna be associated with before I post it. Now, it’s not very long where we live in a world where all media fits that description. All media is manipulated. And so there is an internal limit, I think, to where we can get to with pure, with approaches that focus on labeling and authenticity, when we have to accept that the changes, the innovations that we’ve seen over the last year with generative AI, a lot of them are here to stay. People are going to find really cool, interesting, useful uses for them. And so we need to make sure that we don’t, we’re not confusing, I guess, the issues when we start to figure out what sort of solutions might work in one context and making sure that they’re appropriate for the digital age. That’s gonna be tough.

Deborah Steele:
Wonderful, thank you. Just at this point, I would like to remind everyone that the number that you see in the top of your screen is the time allocated to speakers. So if you are wondering why they’re not expanding further on some points, it’s because of the timekeeping that we’re trying to do to manage this discussion. Moving on now to the next question, and to you, Mr. Yamamoto, public concerns over misinformation have long existed coming into view more recently in the context of political campaigns and disinformation. Where are we now in combating misinformation and disinformation online? What are the lessons learned so far?

Tatsuhiko Yamamoto:
Thank you very much. I believe there are two directions that are now beginning to appear. One is that the issue has become more serious than before, and the second is that we’re beginning to see lights of hope. So there are actually quite different directions, but the issue has become serious because the attention economy, which is the business model of the platform economy, has become so predominant. The attention economy meaning, I’m sure you know about the attention economy, but it means that we are in this flux of information, in this flood of information. Against the amount of information that’s supplied to us, the time that we can devote to a piece of information is very limited, and therefore focusing on such valuable information is creating a business. And so winning engagement, or stealing the engagement of the eyes and ears of the user, has become the priority. And so we have algorithms and recommendation systems which are now more and more sophisticated to steal users’ time, and that has led to echo chambers and filter bubbles. And this is leading to the amplification of misinformation and disinformation. And under the attention economy, fake news or disinformation can win engagement, higher engagement, which will mean it can win more profits and, therefore, tends to be disseminated more. And that’s really aggravating the situation. On the other hand, we do see signs of hope, and that is that the issue and awareness of this information pollution is now shared across borders. And also, there is collaboration and cooperation amongst the public and the private sector, and there’s a consensus on improving data literacy and education. And the issue of misinformation and disinformation should be, I believe, understood as a structural issue. The structural ecosystem of the attention economy must be dealt with at the roots, I believe. So that’s one point. Also, we have to tackle the structure and the ecosystem, and to do that, for example, technological standardization may be one way. One multi-stakeholder example is the OP, the Originator Profile. OP stands for Originator Profile, which means that an accredited Originator Profile, or OP, is placed on authenticated data that is generated by authenticated organizations. And so that sort of technological standard must be discussed at the global level. And also, literacy education will be very important. I believe this will become critical if we want to tackle the structural issue. Thank you.

Deborah Steele:
Thank you very much, Kwaitso. The issues of media literacy, and as was mentioned in the first panel this morning, the importance of public understanding of these issues are key to helping to reduce the risk. Nick, from your point of view, what are the lessons learned so far?

Nic Suzor:
I think, well, it’s been a busy few years for mis and disinformation, from election interference and into a global pandemic. There’s a lot of pressure around the world for platforms and tech companies to come up with things that actually work. The challenge is it’s really hard to know what actually works. One of the things that I think is most pressing is that we still don’t really have a lot of research on things like inoculation and media literacy, for example, that we were talking about. Absolutely, increasing people’s ability to detect and understand and correctly interpret the media that they’re presented with is a really important goal. We’ve seen, during the global pandemic, we’ve seen a couple of things that I think you could draw out as lessons, that one, tech companies can do something, right? For a long time, tech companies have said that they don’t want to get involved in questions of truth, in questions of content and opinion. The urgency of responding to disinformation in the context of the COVID-19 pandemic made it impossible for tech companies to continue down that road. I think we’re at an interesting point now where it’s clear that tech companies do have a role, but also that that role is highly controversial. And the requirements of tech companies to make decisions about how people talk, particularly as you go into more of the disinformation side of that spectrum, is incredibly complicated. Tech companies have become good at using, I would say, spam reduction techniques and so on to recognize coordinated inauthentic behavior, but what to do about people spreading falsehoods? That’s a social question and we don’t know really what the answer is. I want to put in a plug, on Tuesday the Oversight Board is going to announce that we’re taking a case about a Facebook user’s post of a digitally altered video of the US President Biden, and this video has been around for a while, but the board will have a public comment process because this is really a conversation that we have to open up. It cannot be a conversation that is left to tech companies and to technical solutions because the problems ultimately are social problems. I really want to encourage people here, if you can, to help us out because as we start to work through, I think the board’s an exciting way that we can bring some of these issues to light and bring forward the conversations, the sophistication of that conversation. I’d really appreciate the input from those of you in the room and certainly those on the panel if you can help us figure it out, but there are big outstanding social questions about where we draw the limits between parody and satire, between acceptable and non-acceptable imitation, between acceptable and non-acceptable speech on private platforms, particularly in election contexts where that’s so

Deborah Steele:
important, not just in the US but around the world. I’m sure you’ll get lots of ideas from your other panelists shortly. Moving on, women and girls, refugees, racial and ethnic minorities and LGBTQ plus people usually bear the brunt of harm caused by online disinformation and misinformation intended to target them. Just a couple of weeks ago we saw the case of Spanish schoolgirls being misrepresented in porn videos. Given current trends, what can we do to protect and empower these communities and what tools can they use to protect themselves? Vera, would you like to go first on this one?

Vera Jourova:
Well, I have so many things to say on the previous topic, but I will be disciplined and answer what you ask about. Well, with social media and internet, we unfortunately see an incredible increase of shameful practices and they have usual victims. Women, especially women in public space: politicians, journalists, judges, the leaders of the NGOs, when they open their mouths on Internet, they are immediate targets of horrible attacks. LGBTIQ people, of course every kind of minorities, everyone who is different, different from who, different from what, answer yourself, are the immediate targets. And when we see such a level of aggression, of course the law has to react. And in the EU we have very strict rules. What’s illegal? And these are aggressive and very often illegal attacks. It’s not about satirical or offensive content. No, these are the messages and reactions inciting violence. It’s illegal in the EU. After horrible experience from the previous century, when the Holocaust started with words and the reaction of the society was passive, we have this experience. So we have the legislation in force for the offline world, which says which content is illegal and it contains hatred against individuals or groups of citizens. So our mantra in the EU is what’s illegal offline has to be treated as illegal online as well. And here comes what the professor mentioned, like, how did you say it, attention economy. I would call it dirty business. When, of course, for those who are running the algorithms, the big tech, for years they were making big money on hatred and alarming news and apocalyptic visions and, of course, also dangerous disinformation. And the EU, we sat around the table with the big tech and we discussed that they cannot continue like that. And for some period of time we had a code of conduct against hate speech and code of practice against disinformation. And now we have the legally binding Digital Services Act, which says this, you have to resist making money on illegal content and on disinformation, which is dangerous for the European societies. And it will be under the enforcement structure. There are penalties. And so we mean it seriously in the EU that we cannot continue just passively watching what’s happening online against the selected groups. Last word on women. For me, the Digital Services Act is not enough. So I proposed the first ever European directive against violence against women, which contains a very strong chapter on digital violence. And it is now in the legislative process. And I believe that once the law enforcement authorities, like police and prosecutors in the EU member states, will have these laws in their hands, they will enforce more the fulfillment of the rules, which should protect women and everybody else who is the target.

Deborah Steele:
Thank you. Tatsuhiko.

Tatsuhiko Yamamoto:
Thank you very much. Yamamoto-san, what do you think about this? Yes, thank you. I like the way you use the word attention economy that I used earlier. But under this business model, hatred and also fear as well as anger, I think, can gain more engagement. Therefore, in that sense, under the attention economy, the communities that you refer to are placed in a very vulnerable situation. And I think that was the case in the physical real world. And now that will become more serious in the online world. And hate speech and disinformation and misinformation, when they’re combined, it could become a very impregnable force, a difficult-to-solve cause. And if it’s focused toward one individual, then I think a speedy moderation would be the solution. But for such acts addressed to the community, there are several solutions. One would be to have some human rights society or organization that has the trust of the world, which can take reports from such individuals, can start fact-finding, and also, as an organization, issue an article on the fact-finding results of such misinformation as a reputed organization. And these fact-checking articles must be read, must be read. So for that, the platform companies should be engaged, I think, to enlist their cooperation so that we can issue these articles. And I think it’ll be the responsibility of these platform companies to provide fact-checking articles on misinformation and disinformation to the community, and then also to feature them prominently. And I think that that’ll be their responsibility, I believe. Now if I may cite a case of Yahoo Japan, which is a news platform here in Japan. And they have the Japan Fact-Check Center, which was established last year. And they will also be sharing articles issued by such a center. So an international fact-checking organization and the enlisting of media platform companies, I think, would be critical.

Deborah Steele:
Terrific. Thank you very much. Maria, your thoughts, please.

Maria Ressa:
So many. I mean, let me pick up from Professor Yamamoto. Let me just pull up some of this stuff, and I will hit my three minutes. So first, when he says structural design, it is the design. It is, by design, meant to spread lies faster than the facts, right? So in 2018, MIT said lies spread six times faster. But now, in a new world that we have today, it’s probably significantly worse because all of those safety measures have been rolled back. Twitter, now X, Facebook, now Meta. They’ve changed their names, but it’s gotten worse. So that’s the first step. And Vera talked a little bit about the victims of it, that this is illegal in the real world. The online violence is real world violence because your mind, your person on the cell phone is the same as the person walking in the real world. There’s only one person that’s being influenced by all of this. And I think the key part to this, let me, with all respect to Nick, I have many friends on the oversight board, but I think what we’ve seen from the platforms is the three Ds, deny, deflect, and that leads to delay, which means more money, right? This is if more lies spread than facts, you don’t have facts, you can’t have truth, you can’t have trust. This is part of the reason it is significantly worse, and that’s only the first generation contact. Finally, with what Randy said, voluntary is nice, we tried it in 2016 through 2018, and it didn’t work. So the question is, what can we do differently today because the harms are going to be significantly worse? And again, if you don’t have facts, you don’t have truth, you don’t have trust, you don’t have integrity of elections. And if you don’t have integrity of elections, you get statistics like what we have right now, VDEM in Sweden matches Freedom House in the United States, but VDEM pointed out that last year, 60% of the world is now under authoritarian rule. This year, that number went up to 72%. 2024 will be the tipping point of the world. Thank you, EU, for the laws that are coming out, but I joke and say it is the race of the turtles while the technology is coming out every two weeks, right? It’s agile development, we must move faster. I sit on the leadership panel of the Internet Governance Forum, so you can see how pushy I can get. We must move faster, and we cannot rely on the tech companies alone because their motive is profit. We must move. And again, thank you. The EU is putting in place things that frankly are still too late. 2024, look at the elections. Taiwan in January. It’s getting clobbered by disinformation from China right now. In February, Indonesia, the world’s largest Muslim population, the son-in-law, former President Suharto, is the frontrunner. So you may have a repeat of the Philippines. If elections were held today, you have the EU coming up, maybe Canada, the UK. You have, of course, the United States. Anyway, I can’t list all of them. It’s just we are there, folks. This is it. We are looking at the abyss. And not to mention the answer to that question, which is, who are the ones who are harmed the most, are the most vulnerable? The ones where institutions are weaker, the Global South. The ones who are first responders last, I’m at 42 seconds over. The last part is, misinformation is something that the tech lobby would like you to use. But you can definitely test for disinformation. We’re only a news organization, but we can tell the networks. We call them recidivist networks, if you’re doing counterterrorism. You know who they are. You can pull them down. 
And that just means that you will make a little less money.

Deborah Steele:
Thank you. Nick, you’re next. And just a reminder, the focus for this discussion, we move on to regulatory measures further on. But the focus for this is, what can people in minority groups, in groups that are discriminated against, what can they do to help protect themselves?

Nic Suzor:
So I’m going to try to be brief and reclaim some of my time that I went over in the last two responses. And I will stick to the question. I think the first thing we need to do if we’re thinking about where next is acknowledge, acknowledge, recognize that power matters. Any technology that is built, and machine learning technology that learns from an existing system of hierarchy, a world that is unequal, will likely perpetuate, and if it’s useful, often exacerbate, those inequalities. So with that in mind, I think when we’re talking about responses, when we’re talking about how we can help marginalized users and vulnerable communities, I think one of the things that often gets lost in this debate is that acknowledgement, that it matters, it matters where the speakers are, who the speakers are, what sort of power they have, what sort of networks they have. It matters who the targets are, whether there is continual existing risk of exacerbating violence. Context matters, but this is something that’s been very, very difficult for tech companies, I think, to grapple with. When tech companies generally provide error rates for their machine learning classifiers, for example, they’re only high-level figures. And you can say, well, you can classify hateful content with a 98% accuracy and we can remove 100 million pieces of content before anyone even sees it. That actually doesn’t help address this major problem. If power matters, then it is incumbent on the tech companies to do more to proactively look at how their tools are being used against and by marginalized communities. This is one of the things that when we keep talking in terms that are neutral, when we disregard power, we lose on this point. I think it’s incredibly important that we continue, we know that vulnerable people experience a greater proportion of abuse and misinformation. We know that people from marginalized backgrounds have a harder time seeking a remedy and dealing with it. And given those things, why aren’t we doing more proactively to ensure that the systems that we’re building are built with historical inequality in mind, that the systems we are building have built-in safeguards against perpetuating hierarchy? I think it’s easy to get distracted on tools that people can use to help structure their own experience. I think autonomy is incredibly important, but I don’t want to see our solutions focus on putting the burden back on the people who are already marginalised. We need to build that in.

Deborah Steele:
Thank you. Vera, over to you. What are some of the regulatory challenges to addressing these issues?

Vera Jourova:
I might simplify it to only one sentence: the cure must not be worse than the disease. Which means that when we regulate the online space in the EU, we always have in mind that we have to keep, as the main principle, the freedom of speech. It means that it must not happen that, in the near future, the uncomfortable opinions of others become disinformation from the point of view of those who have the power at that moment. This must not happen. But I want to say, and I always start by saying this about disinformation, it has always been with us. But with the existence of the internet and social media, it is being spread at supersonic speed. And the intensity and the massive impact are really the matter of concern. Especially now: with the war in Ukraine, the EU space is overflooded with Russian propaganda. And it's always the same narrative. The aggressor says that he is the victim, and the victim is the aggressor. We will see a lot of it now regarding the horrible war starting in Israel; we will see a similar narrative. So I want to say that disinformation serving aggressive regimes is something we have to pay attention to. Coming back to the time of peace, regardless of the wars, we started in the EU to apply the code of practice against disinformation, which is a voluntary commitment of mainly big tech, which cooperate with us on increasing the fact-checking in all our member states. I am still criticising that they are not doing enough in the smaller states, with their languages. You mentioned, Madam Director, the Slovak elections. This was exactly a case where we saw insufficient fact-checking. So fact-checking is the core thing, but also, next, demonetising: the code of practice contains rules on how to deprive the producers of disinformation of their financial sources. So we engaged the advertising industry. Two more things next to the code of practice, which is mainly about the cooperation with the platforms. We are strengthening the independence and power of the media in the EU, which is key, because we want the people to have places where they can get the facts. We are not touching opinions. We want the facts to be delivered to the people so that they can make their autonomous, free choice. If the people want to believe, sorry, stupidities, it's their right. But I think that it's our obligation to enable them to have the facts available. That's why we are supporting independent media. And last but not least, and it's what the Professor mentioned, media literacy, but it's the long-distance run. I think that we cannot wait for society to become resilient and ready for the everyday confrontation with disinformation, but we have to also work in this direction. We have a lot of funding in the EU and a lot of projects with member states, but I know I am beyond time. Thank you.

Deborah Steele:
Maria Ressa, your thoughts on the regulatory challenges?

Maria Ressa:
Yes. So I think the first is that, as the first panel pointed out, everything is about data today, right? Content is what we gravitate to, because that's what we were used to in the old world, but really what we need is something that the DSA will now give us: real-time flow of data. And that data will then be able to show you the patterns and the trends. Once you have that, then you'll be able to see the harms, right? Which, frankly, is data that is already available to the tech companies. So once there is oversight, you know, which is also what you do with news organizations when we used to be the gatekeepers, then civil society can come in and hold power to account. So the problem is not freedom of speech. Anyone can say anything on these platforms. The problem is freedom of reach. And I'm quoting a comedian, Sacha Baron Cohen, from years ago, where he said it is the distribution model that is the problem, right? Who says lies should spread faster than facts? Why do fact checks not spread at all? And in order to be able to get them out to you, we have to create a whole civil society influencer marketing campaign for facts, all of which then gives more money to the surveillance capitalism model. So you go back to that design. I actually was at the Vatican with 30 Nobel laureates, and, you know, we told the Catholic Church: isn't lying against one of the Ten Commandments? Sorry, I'm going to try to joke, as it's really difficult stuff. Anyway, so distribution, and then finally the last part, in terms of a regulatory framework. I never use the word misinformation; I use disinformation. It is as it does, Vera, right? We agree on this, because it is insidious manipulation. In the Nobel lecture in 2021, I called it toxic sludge, because it is laced with fear, anger and hate, and it will be worse with generative AI, right? Because in order to fight that AI arms race, you're going to need AI to fight AI. It is beyond our comprehension. We could talk about this more later, in terms of how generative AI is now exponential compared to what came before. So what needs to happen? It's really simple. There's a huge lobby trying to convince you it is difficult. You don't need to know how to build an airplane to regulate the air industry. You don't need to know how to build this hall in order to put in place safety measures to make sure the building doesn't fall down around us. So it's a simple thing. We need transparency, which is what the DSA is looking to give. I would suggest we move it beyond just academics, because civil society, journalists, this is where we will look for accountability. And then the second thing is: how do we define safety in the public sphere? Because think about it like this. It's almost as if, during COVID, there were no regulations in place for pharmaceutical companies: I gave vaccine A to that side of the room, and this side of the room I gave vaccine B. There's no law against trying it out in the public sphere in real time, right? Oh, vaccine A people, so sorry, you died. We have these examples globally. It's dangerous. A toaster, the Cambridge Analytica whistleblower said this, a toaster, in order to get into your home, has to meet more safety regulations than what we carry with us everywhere we go. So stop the impunity. Transparency, accountability: that means roll out the code, but if there's a harm that happens, you're accountable.

Deborah Steele:
Thank you. Randi Michel, how do we become better producers and consumers, better prosumers?

Randi Michel:
I think the important thing here is that in today's complex and rapidly evolving technological environment, as Maria just said, transparency is key. The public needs to be given the tools to understand what information is authentic and what is synthetic. And on this, I want to make three key points. First, I agree with Nic that technological solutions are not everything. They are not a panacea, but I think it's really important to remember that they are a key element of this equation. And in that discussion, differentiating between falsehoods and synthetic content is really important. While falsehoods are a matter of public conversation, as you alluded to, synthetic content is something that we can identify and label. And granted, there is still conversation needed about how exactly to define synthetic content, but that's something we can work towards. The Biden-Harris administration believes that companies that are developing emerging technologies like AI have a responsibility to ensure that their products are safe, secure, and trustworthy, and the voluntary commitments are a key part of that. The second point I want to make is in response to your point about voluntary commitments. I certainly agree that they are not sufficient, and that's why the Biden-Harris administration is currently developing an executive order that will ensure the federal government is doing everything in its power to advance responsible AI and manage its risks to individuals and to society. And the administration will also pursue bipartisan legislation to help America lead the way in responsible innovation while mitigating the risks or harms posed by this technology. And the third point I want to make is regarding the bottom-up approach that Nic was alluding to, which is the second component alongside the technological solutions: the civil society aspect, the empowering-communities component. Building resilience requires a bottom-up approach. At the second Summit for Democracy, the U.S. Agency for International Development and the State Department announced the Promoting Information Integrity and Resilience Initiative, which will enhance technical assistance and capacity building to local civil society organizations, media outlets, and governments around the world, providing the tools and training to develop effective identification and response measures. These three things together are necessary to build the resilience throughout the world that we need to address these kinds of threats. Governments need to do our part to ensure the public can verify authoritative government information, while at the same time the dynamic digital ecosystem will require continuously evolving technology, regulations, and international norms. It's incumbent on all of us, governments, civil society, and the private sector, to work together to make sure that AI expands rather than diminishes access to authentic information.

Deborah Steele:
Maria, how do we become better producers and consumers in this age?

Maria Ressa:
I think this is, again, a whole-of-society approach, right? And let's start with our industry, the news industry. We've lost our gatekeeping powers in many ways, and I would say that began in 2014. And one of the things we can do is begin to look at elections not as a horse race, but as critical to the survival of democracy, right? Critical to the survival of our values. So I would say, first, where Rappler is working, for example, with OpenAI, working out how we can use it: I have a team with them in San Francisco now, looking at how we can use this new technology for the public sphere. How can we make it safer for democracy? The second part is that we need to tell our people how we are being manipulated. And we don't do it enough. We're fangirls and fanboys of the technology. We need to get beyond that and look at the entire impact. We're at a different point in society, and frankly, at a different point from when the Internet Governance Forum was created many years ago. This is an inflection point. The third part would be more transparency, more accountability. We have a lot of values now that have come out: the White House Blueprint for an AI Bill of Rights that was done by Alondra Nelson; the OECD; and today we'll be unveiling the hashtag, The Internet We Want, from the leadership panel of the Internet Governance Forum. The values are there. We know where we need to go. Now we gotta get off our butts and operationalize this. Thank you, EU, for moving forward. But we have to do a lot more. And it's great to hear what you've said. Oh my God, I have 56 seconds left. I'm sorry, I went over a little before. I would say the last part is: civil society, where are you? We need to move from being users to citizens. We need to go back and define what civic engagement means in the age of exponential lies. I know this firsthand. I've been targeted by an average of 90 hate messages per hour. In order to keep doing my job, I had to be ready to go to jail for more than a century. I have to have Supreme Court approval to be here in front of you. But the time is now. We need a whole-of-society approach, and we need men and women who are going to stand up for the values we believe in.

Deborah Steele:
Thank you. Tatsuhiko Yamamoto, your thoughts on how we can be better producers and consumers?

Tatsuhiko Yamamoto:
Yes, thank you. Well, as has been mentioned by many speakers before, literacy is critical. And if I may use a metaphor, the act of eating information, or ingesting information: you must be more aware of how you ingest information. And I often talk about information health. I'm promoting this concept of information health because, when you eat food, I think people are more and more aware of what they ingest, what they eat. So we want to trace back to who produced this, with what materials, through what process, the ingredients. We are increasingly aware about the safety check of the food that we eat. Maybe 50 or 100 years ago, we were not so aware of such food consequences. But with literacy, we are more and more aware of the safety of the food that we eat. And so we must take a balanced diet, and that's something that we have learned over the years. And the same concept, I think, should also be adopted in the field of information consumption, that is, information health, because I believe that we are eating a lot of data that is riddled with chemical additives. So the concept of information health: who, using what materials, produced this data, through what processes? Did they use generative AI? I think we have to be more aware of this. Also, I talked about the filter bubble, and the fact that rather than eating a biased diet of data, we must eat a balanced diet of data. And so literacy is very important. Now, I am the head of an ICT literacy subcommittee with the MIC, the Ministry of Internal Affairs and Communications of Japan, and we must communicate this kind of approach. And I also would like to take this concept of ICT literacy and information health to the WHO for discussion, if I can. And so I believe that we need to become increasingly aware of our health as we are exposed to such harmful data. And once we can proliferate this kind of approach, then we will be able to counter tech companies who are attentive only to disseminating harmful information for their profit, and we will be able, of course, to make them exit the market, which will increase transparency. And I believe that by doing so, we will be able to criticise such companies publicly, and that would lead to their restructuring or structural change. And so I think we need literacy, and also the concept of data or information health, to become better prosumers.

Deborah Steele:
Thank you very much. An excellent concept, and I think we can all see the value of taking it forward. It's time now for our ministerial high-level respondents. I would like to invite Mr Nezar Patria, the Vice Minister of Communication and Informatics of Indonesia. Thank you.

Nezar Patria:
Thank you. Thank you. Excellencies, distinguished speakers, and delegates, ladies and gentlemen, good afternoon. First of all, thank you for having me as part of this panel. Indonesia is very glad to be able to share our story in the hope of nurturing our collective effort in countering misinformation and disinformation. As we all know, with increased internet usage, the distribution of false information to misdirect public opinion is also increasing, especially with the emergence of generative AI. In 2021, the Indonesian Central Statistical Agency reported that 62% of Indonesian internet users had seen information or content on social media or online news outlets that they believed to be false or dubious, and many of them also doubted the accuracy of news they read on social media. This is, of course, concerning, since it might polarise our society, although Indonesia is noted as one of the countries that believe AI might bring a positive impact to their livelihood. Responding to this situation, as one of the world's largest democracies in Southeast Asia, Indonesia has been very active in promoting efforts to counter misinformation and disinformation. Through the Ministry of Communication and Informatics, at the national level we have developed a comprehensive strategy to counter misinformation and disinformation by establishing a national digital literacy movement at the upstream level, debunking hoaxes at the intermediate level, and supporting law enforcement activities at the downstream level. At the regional level, ASEAN has adopted the ASEAN Guidelines on Management of Government Information in Combating Fake News and Disinformation in the Media. This framework acts as a roadmap for the member states to better identify, respond to, and prevent the spread of false information. Against this backdrop, I stress the importance of countering misinformation and disinformation through the following measures. First, we need our society to be more digitally literate, especially to be able to do pre-bunking of disinformation. Only in this way can we better equip people not to fall victim to false information. We must not, I emphasise, we must not let the bleak history of when our society fell for false information on vaccines and COVID-19 happen again. Second, we must admit that our digital ecosystem can no longer rely on the economics that incentivise the spread of misinformation and disinformation. We must do better in developing governance that incentivises productive, meaningful, and accurate information. Last but not least, we shall intensify our cooperative efforts to further technological adoption that is useful to counter disinformation and misinformation, especially in facing emerging technologies such as generative AI that accelerate the generation of synthetic information in digital ecosystems. Indonesia believes that through collaboration we can nurture our society to be more resilient to the emerging technologies that might threaten our society and our well-being. Thank you.

Deborah Steele:
Thank you very much. Next, Mr Paul Ash, Prime Minister’s Special Representative on Cyber and Digital in New Zealand. Thank you.

Paul Ash:
Kia ora tātou katoa, ngā mihi nui ki a koutou. Thank you for the opportunity to be here today. I don't have a prepared statement. What I would like to do is just reflect a little on what I think I've just heard and posit some ideas about where we might go next. We've heard from the panel today some significant issues about the timing that we have to grapple with, with the fact that while mis- and disinformation, however you define them, were previously an issue, the rise of new AI systems has really sped that issue up, and I think there's a really important takeaway for us there in terms of how we focus on finding solutions together. The second thing we've heard is the disproportionate impact on affected communities, particularly women and girls, particularly members of the LGBTQI+ community, particularly refugees and migrant communities. And we've heard the importance of involving them in solutions from the very beginning of the process, difficult though that can be sometimes, to find participatory environments that work well for them. I think we heard a clarion call from Maria Ressa and from others about the need for urgency. If the statistic that 72% of states are now in an authoritarian situation is correct, then out of the 193 member states of the United Nations, that means that there are 44 that are not. And of those, many are challenged at the moment by the impacts of disinformation and of influence operations on their institutions and societies. That means we're starting from a really difficult place. And I think we heard a range of different solutions posited, from regulation, as the Vice-President outlined, through to societal solutions, literacy, et cetera. But what I also think we heard was that none of those, in and of themselves, is actually going to be sufficient to solve this problem. We are going to need, I'm sure my Swiss friends will like this, a Swiss cheese of solutions that enables us, through layers, to solve the problem. Where does that lead to in terms of next steps? The first thing that leaps out to me is the need for pace. We do not have time to admire this problem. We actually need action on it, quickly and carefully. The second, and it came out from this panel, but also elsewhere, is the need for influence, for leaders who will speak up for the values that we share and that are under threat as a consequence of disinformation. We also heard of the need for whole-of-society solutions, that citizens can participate in them, that governments can actually work alongside industry, civil society, and the technical community. That phrase, where are you, civil society, was actually quite pertinent, as this conversation works through from what is largely a government-centric perspective. We heard about the need for that conversation to be based on common values and principles. International human rights law needs to be at the foundation of that. So too a free, open, and secure internet, with as many more adjectives as we need to stick in front of the word internet to make that work. And finally, we heard about the need to operationalize those values in a construct that actually works. We have some experience of that in New Zealand with what happened after the attacks in Christchurch in March 2019, and I'm delighted to see a full panel of Christchurch Call supporters in front of me, those who helped stand up with us against the scourge of violent extremism and terrorism and look for a truly multi-stakeholder response that brought all of those parties together and continues to do so in trying to find solutions. And I think that's probably the lesson I take away from the panel I've heard and from my own experience in this area. This challenge of information integrity, and I use that phrase instead of disinformation and misinformation, the challenge of having confidence in the information environment in which we operate, is perhaps the most urgent issue facing us amongst the stack of issues in the discussions around frontier models, generative AI, foundational models, whatever you wish to call them, because it has the potential to undermine, if it's not grappled with correctly and well, the very institutions that would enable us to govern all of the other issues that we're going to need to grapple with in the AI environment. And it has the ability, if it's not handled well, to undermine the human rights principles that underpin everything we do. But it also, as Maria Ressa and others have so eloquently articulated, is a step on the pathway from information integrity issues to radicalisation, to violent extremism and terrorism. Those two issues are inextricably linked in an age of algorithmic amplification. I guess the one thing I'd say here in closing, and it's something that's come through from this panel very loudly, and I expect we'll hear it from civil society over the next two or three days: one of my mentors taught me, as we were doing the Christchurch Call work, to be less Westphalian in your response. The solution to this will not just be entirely generated by states. It needs to be a solution that industry participates in, because actually, longer term, beyond the next quarterly reports, they have as much of a stake in thriving democracies with human rights underpinning them as anybody else. The same with civil society, coming from a different perspective. And so I'd say, as we tackle this together, let's think really hard about the institutions we have and whether they're fit for purpose. Our experience of multilateralism is that often it moves much, much more slowly than technology, and we need to find a way to create truly collaborative, truly multi-stakeholder institutions that build, sometimes from voluntary commitments into regulation, from the ground up. Thank you very much for the expertise and wisdom from this panel. I hope that's somewhat of a distillation of some things that we can take forward from here. Ngā mihi nui ki a koutou.

Deborah Steele:
Excellent. Thank you very much. We're going to have time now for a two-minute summary from each of our panellists. Two minutes only; eyes on the clock. Nic, would you like to begin?

Nic Suzor:
Let's go. I think we might be behind our quota on multi-stakeholderism. So: multi-stakeholderism, multi-stakeholderism, multi-stakeholderism. This is really important, because I think there's a tendency to look for a single solution here, and there is no single solution. There are limits to what governments can achieve. It's not possible to make all of the harmful content that we're talking about today illegal. Much of the harmful, false content that is spread, that leads to bad outcomes, is perfectly lawful. So as much as we would like, governments can't be the sole arbiters here. Governments are also, well, constrained, because sometimes they're the worst actors. The 72% statistic? That sounds terrifying. I'm not sure. Well, I know that I am very concerned about requests for removal and censorship of content that come from state actors trying to keep hold of their power. I think that's incredibly difficult, and there's a responsibility on private actors, on civil society, to resist accordingly. There are limits to what, well, I don't necessarily trust the private sector either. I certainly don't trust the private sector. We've seen some progress in tech companies, but we also have to be really careful about other components. I don't trust mainstream media to have the answer to this. Most of the disinformation that we see is amplified, picked up in a cycle of amplification, by both mainstream media and social media. I think that makes it really difficult, and it means we have to work together.

Deborah Steele:
Thank you. Randi Michel.

Randi Michel:
Thank you. I'd like to make four quick points to summarize what I've said earlier. First is the importance of governments in implementing authentication and provenance measures, and we hope to collaborate with the governments represented here in this room and around the world on building out a global norm on that issue. Second is the key role that technology companies play in providing transparency to their users, which we try to encourage with the voluntary commitments. Third is the importance of engaging civil society and the need for multi-stakeholder engagement, as mentioned previously. And fourth, and this is an issue that I think we haven't talked about enough today, so I really want to emphasize it, is the need to ensure that these efforts to advance transparency do not evolve into censorship or infringement on internet freedom. The best way to address disinformation is not by limiting content, but by disseminating accurate information. The growth of generative AI does not change that. We look forward to continuing to work with civil society, the private sector, and governments from around the world, from developing and developed countries alike, to address these evolving challenges in a way that upholds human rights and democratic freedoms. Thank you very much.

Deborah Steele:
Thank you. Maria.

Maria Ressa:
So I will also do three points for the two minutes that I have. The first is one I haven't made yet: I've been critical of tech companies, but we work with every one of the tech companies that are there, because they must be at the table, right? They're the only ones who have the power right now to stop the harms. They're choosing not to, and this is where I'll use an ASEAN phrase: please, tech companies, look at your business model, moderate your greed, because it is about enlightened self-interest. You do not want democracy to die. You do not want to harm people. It is not up to governments. And the second is to the governments. The governments are late to the game, but that's also partly because we had to figure out what the tech was doing, right? We were one of them: I drank the Kool-Aid, you know; Rappler was created on Facebook. If Facebook had better search, I may not even have created the website. But having said that, governments now know the problem, so please work faster. And in terms of speed and pace, you're talking about agile development, which rolls out code every two weeks. So we must come up with systems of governance that can address the code, or you stop the actual rollout of the code, which then moves into business, or you make the companies responsible for what they roll out. And the third one is to citizens. This is it for us. The journalists are holding the line, but we can't do this without you, so move from users to citizens, active citizens. This is the time.

Deborah Steele:
Vera.

Vera Jourova:
Yes, thank you. Three comments; hopefully I will manage in two minutes. First of all, we have to work together, the democracies of the world, because if democracies are not the rule makers and become the rule takers, I think that we will fail at something absolutely essential, existentially critical. So that's why I am also happy that we can work within the G7 on the AI code of conduct and other things. Second thing: I spoke here about the regulation, and Maria Ressa repeated that we are still too slow. Maybe in the EU we are a bit faster than elsewhere. But it must not be top-down only. That's why, also in the EU, we believe in very strong involvement of civil society, of demanding citizens who do not want to be manipulated. We believe in strong, strong media. We believe in engagement of the academic sphere, because we need to understand what's happening, to analyse it, and then take well-informed political decisions. Last thing: consumers versus citizens. I worked as a commissioner for five years to protect consumers, and it was mainly about the protection of, sorry, I have to say it, stomachs and health, so that people are not poisoned. And we did not invest so much in people's hearts and brains and souls, so that they are not poisoned. And I think that now it's time to do something more with the citizens and for the citizens. That's why we are also preparing the regulation on transparency of political advertising online before the elections, because we need the citizens to understand what's happening, not to get manipulated, and not to turn from individual citizens, who should have free choice, into an easily manipulated crowd, because that could happen very easily online. Thank you.

Deborah Steele:
Thank you. Would you like to share your summary?

Tatsuhiko Yamamoto:
Thank you very much. Thank you for making me a part of this. Disinformation is becoming very serious. But on the other hand, we see some hope. I think there was a reference to hope, and I was able to mention this, because I think we are on the same page in our understanding, and everyone knows that we have to take action, so there's a consensus on this stage. And also practice: we have to operationalise, to take action, immediately. This is something very important. For that to happen, we need to have international collaboration and international dialogue like the one we are having now. And the platform companies, the tech companies, need to also be at the table. This is quite important. Platforms are expanding and becoming gigantic and very powerful. And I am a researcher on constitutional law. The constitution is there to control the power of governments, but digital constitutionalism is now emerging as a term. In other words, not just governments, but tech companies too need to be managed or controlled by some piece of legislation. A single government or one country cannot confront the platform companies at this point in time, and therefore an international framework is needed so that we can have a dialogue with the platform companies. That's what I believe. And also, one thing we have to focus on is the structure, the attention-economy structure. If you look at each and every piece of the phenomena that you are encountering, maybe you can't really bring any solution to the total system; therefore we need to look at the ecosystem. But bringing a solution to that is quite difficult, because we are going to have to make a new culture in the future. But I have hope that we can do so together. So we would like to continue the discussion and exchange opinions going forward. Thank you very much.

Deborah Steele:
Thank you. And to complete Maria's list of elections that we have in the next year, the world's three biggest democracies are going to the polls: Indonesia, India, and the United States all have elections in the next 12 months. Please join me in thanking our exceptional panel: Nic Suzor from the Meta Oversight Board, Maria Ressa, 2021 Nobel Peace Prize Laureate, Ms Vera Jourova from the European Commission, Ms Randi Michel from the White House National Security Council, and Mr Tatsuhiko Yamamoto from Keio University. Thank you very much. Panel members, if you would just join me for an official photograph.

Deborah Steele: speech speed 128 words per minute; speech length 1551 words; speech time 725 seconds

Maria Ressa: speech speed 177 words per minute; speech length 2664 words; speech time 903 seconds

Nic Suzor: speech speed 143 words per minute; speech length 1853 words; speech time 777 seconds

Nezar Patria: speech speed 120 words per minute; speech length 515 words; speech time 258 seconds

Paul Ash: speech speed 183 words per minute; speech length 1127 words; speech time 369 seconds

Randi Michel: speech speed 154 words per minute; speech length 1354 words; speech time 527 seconds

Tatsuhiko Yamamoto: speech speed 140 words per minute; speech length 1877 words; speech time 806 seconds

Vera Jourova: speech speed 143 words per minute; speech length 2023 words; speech time 847 seconds