HIGH LEVEL LEADERS SESSION II

8 Oct 2023 02:15h - 03:45h UTC

Event report

Speakers and Moderators

Speakers
  • Maria Ressa, Journalist, Editor and Co-founder, Rappler, Philippines
  • Nicolas Suzor, Member, Oversight Board, Australia
  • Nezar Patria, Vice Minister, Ministry of Communication and Information Technology, Indonesia
  • Paul Ash, Prime Minister’s Special Representative on Cyber and Digital, New Zealand
  • Randi Michel, Director of Technology and Democracy, White House National Security Council
  • Tatsuhiko Yamamoto, Professor, Faculty of Law, Keio University
  • Vera Jourova, Vice-President for Values and Transparency, European Commission
Moderators
  • Deborah Steele, Director of News, Asia-Pacific Broadcasting Union, Malaysia

 

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Deborah Steele, Director of News, Asia-Pacific Broadcasting Union, Malaysia

The analysis underscores the increasing prevalence and severe implications of misinformation and disinformation, fuelled largely by rapid developments in generative AI. This technology can now create synthetic content with such sophistication that it is nearly indistinguishable from authentic material, presenting an enormous challenge for responding to, and rooting out, misinformation and disinformation.

The situation is further complicated by the structure of digital platforms, where algorithms dictate the content delivered to each user. Many users remain unaware of echo chambers and the algorithmic nature of their feeds. As a result, polarised information consumption is perpetuated, amplifying the dissemination of disinformation and deepening division.

To address these issues, the analysis suggests a holistic approach. This includes a substantial push to enhance media literacy. It also recommends strong political commitment to ensuring the integrity of information-sharing systems, a task that is challenging yet pressing in light of dwindling public trust in institutions.

Regulation also forms a crucial part of the solution, with a clear call for more comprehensive regulatory measures. Alongside this, technological interventions, such as advanced authentication tools, play a pivotal role: they can help distinguish synthetic content from genuine material, mitigating some of the risks associated with generative AI.

This in-depth analysis connects with several Sustainable Development Goals (SDGs), particularly SDG 9 (industry, innovation and infrastructure), SDG 10 (reduced inequalities) and SDG 16 (peace, justice and strong institutions). The findings stress the urgency of proactive action to counter misinformation and disinformation and to contribute positively to these universal goals. Overall, this accentuates the critical intersection of technology and societal challenges, underscoring the importance of informed governance and policymaking in this sphere.

Vera Jourova, Vice-President for Values and Transparency, European Commission

The European Union (EU) is taking strides to regulate generative artificial intelligence (AI) effectively and to counteract the proliferation of disinformation. These efforts are aimed primarily at high-risk AI applications, particularly deepfakes: digitally manipulated video or audio portraying individuals saying or doing things they never did. Without clear labelling or watermarking, such deceptive content can significantly distort public perception and inflict harm, notably in electoral processes.
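As a purely illustrative aside, machine-readable labelling of AI-generated media can be as simple as attaching provenance metadata when content is created. The sketch below is hypothetical rather than any EU-mandated mechanism; the metadata keys ("ai_generated", "generator") and file names are invented for illustration, and it uses the Pillow imaging library:

```python
from PIL import Image                      # Pillow imaging library
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Attach hypothetical provenance metadata to a generated PNG.

    A toy illustration of machine-readable labelling; production schemes
    (e.g. cryptographically signed content credentials or robust
    watermarks) are far harder to strip or forge.
    """
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical key
    metadata.add_text("generator", generator)   # hypothetical key
    image.save(dst_path, pnginfo=metadata)


def read_label(path: str) -> dict:
    """Read back the PNG text chunks so downstream tools can check the label."""
    return dict(Image.open(path).text)


# Hypothetical usage: label an AI-generated image, then verify the label.
# label_as_ai_generated("generated.png", "labelled.png", "example-model-v1")
# print(read_label("labelled.png"))
```

Plain metadata like this is trivially removable, which is precisely why the discussion points towards robust watermarking and regulation rather than good-faith labelling alone.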

Online harassment and hate speech, particularly that aimed at women, racial and ethnic minorities, and the LGBTQ+ community, are on the rise. This issue requires immediate attention, yet it carries the challenge of identifying and removing such malicious content without infringing on freedom of speech, a principle the EU firmly upholds.

The EU therefore turns to legislation such as the Digital Services Act to curb the monetisation of illegal online content and stem the spread of disinformation on the Internet. As a legally binding act, it offers a robust enforcement framework, complete with penalties for violations. Alongside this, the EU is advocating for a directive against violence towards women, with particular emphasis on digital violence.

Alongside regulation, the EU acknowledges the role of technology companies and social media platforms in controlling disinformation. It urges these corporations to assume responsibility by boosting fact-checking capabilities and complying with the Code of Practice on Disinformation, a set of voluntary commitments designed to combat misinformation online.

Support for independent media is also a crucial part of the EU's wider strategy. The EU prioritises the capacity of media outlets to deliver factual information, thereby helping citizens make informed and free choices, and its policy strengthens these outlets' capacity for accurate and autonomous reporting.

In the fight against disinformation, media literacy is recognised as a long-term challenge. The EU funds media literacy heavily, working in close collaboration with member states on numerous projects designed to enhance the media literacy skills of the populace.

Furthermore, the EU emphasises the need for the world's democracies to cooperate in crafting international rules. It suggests that collaboration at the G7 level could be an efficient way to advance an AI code of conduct. Such global cooperation is seen as a pivotal step towards equitable standards across jurisdictions.

The EU's collaborative approach to regulation, involving civil society, local media and academia, underlines its commitment to balancing technological advancement with public welfare. The EU emphasises the need for broader protection of citizens' minds and souls, informed by its previous experience in protecting consumers' rights and welfare.

Lastly, the EU anticipates stricter regulation of online political advertising, with transparency as the central theme: citizens should understand the content they consume and not fall prey to manipulative information. Transparency in online political advertising is not merely part of the broader endeavour to rein in disinformation; it is also considered crucial for a healthier, more democratic society.

Randi Michel, Director of Technology and Democracy, White House National Security Council

Artificial intelligence (AI) technologies have emerged as a proverbial double-edged sword: celebrated for bolstering innovation, yet sharply criticised for their exploitation in disseminating disinformation, threatening human rights, democracy, national security and public safety.

In response to this grave concern, the US government emphasises transparency and public awareness around synthetic content. Its efforts have secured commitments from 15 leading AI enterprises to advance responsible innovation, alongside the development of guidelines, tools and practices to authenticate digital content. These initiatives represent positive strides in mitigating false narratives.

The engagement of civil society, academia and the private sector is deemed imperative in addressing issues associated with AI-generated media. Strengthening local and global cooperation is key to safeguarding people from the adverse effects of fabricated or manipulated media. Meaningful dialogues are ongoing with top AI experts, consumer protection advocates and civil society organisations, reinforcing a collaborative approach.

On the technological front, tools for identifying and labelling AI-generated content are instrumental in countering falsehoods. However, voluntary commitments from AI companies are viewed as inadequate to address the associated risks, prompting the administration to prepare an executive order and to advocate bipartisan legislation guaranteeing responsible use of AI.

In the global context, a shared norm is deemed necessary to tackle the issue, with calls for technology companies to show greater transparency. Multi-stakeholder engagement is likewise identified as crucial, echoing calls for a collective effort. To build resilience, the US State Department announced the Promoting Information Integrity and Resilience initiative, which offers technical assistance to organisations and capacity-building support to local governments and media outlets.

Summing up, while efforts are underway to enhance transparency and regulate AI misuse, there is an explicit call to ensure these measures do not curb internet freedom or lead to censorship. The commitment to uphold human rights and democratic freedoms remains paramount. All in all, the analysis portrays a multifaceted issue requiring the engagement of multiple sectors and nations, responsible innovation and the establishment of international norms, all while respecting individual freedoms and rights.

Nicolas Suzor, Member, Oversight Board, Australia

The analysis offers insights into complex topics including content moderation, misinformation, AI innovation, synthetic media and proposed solutions.

Content moderation and the authentication of media face grave challenges in an era of rapidly advancing technology, especially with developments in AI and the emergence of synthetic media. The landscape is further complicated by the difficulty of distinguishing misinformation from disinformation, which requires careful examination of who contributes to the spread of harmful false material: both misinformed ordinary users, who may circulate it unknowingly, and malicious actors, who do so deliberately.

Notwithstanding these challenges, advances in AI and generative AI, though initially perceived negatively for their role in creating and disseminating synthetic media, also hold many potentially beneficial applications. The technologies themselves are inherently neutral and well suited to practical application in a progressively digitised age. Nevertheless, a robust response to their potential repercussions is increasingly crucial, particularly given the difficulty of identifying and labelling AI-generated synthetic media.

The role of tech companies in regulating misinformation garners significant attention. Their adaptability in combating misinformation, as evidenced during the Covid-19 pandemic, demonstrates their potential for positive impact. Nevertheless, these roles invite controversy, primarily owing to the firms' reliance on technical responses rather than human-centred strategies.

Regulation of false information presents a complex challenge, particularly in distinguishing between parody, satire and acceptable speech. These complexities underscore the inherent difficulty of addressing the issue within legal and regulatory frameworks. The Oversight Board, an emerging forum for content-related disputes, is therefore viewed as a promising avenue, notably given its active involvement in cases concerning digitally manipulated media.

Amidst these technology-driven changes, focused attention must be paid to marginalised communities. Technology risks perpetuating existing inequalities, and tech firms are called on to implement protective measures proactively. The development of comprehensive system safeguards is strongly advocated, since vulnerable individuals often bear the brunt of misinformation and abuse.

The importance of multistakeholderism in governance is accentuated, with the emphasis that no single solution suffices for the pervasive problem of harmful content. Despite their limitations, including the use of censorship to maintain power, state governments play a sizeable role, accounting for a considerable proportion of total content-removal requests. Civil society's active role is therefore critical in resisting state-imposed censorship.

In summary, these findings offer a comprehensive perspective of the challenges and potential solutions confronting content moderation, misinformation, and the role of AI in our progressively digital world.

Maria Ressa, Journalist, Editor and Co-founder, Rappler, Philippines

The comprehensive review reveals serious concerns about the negative impacts of generative AI, which reportedly weaponises human emotions, specifically fear, anger and hatred. Humanity's first encounter with AI is said to have fostered this proliferation of negative emotions. Generative AI is further linked to an epidemic of loneliness and isolation, with evidence drawn from instances of suicide in which AI's detrimental influence has been explicitly cited.

While showing appreciation for technological innovation, the identified sentiment underscores the necessity of applying technology responsibly. The analysis further outlines the imperative need for robust regulation and enhanced public protection against the misuse of AI. Tech start-ups are criticised for falsely promoting AI as a preferable companion, a misleading and dangerous framing that intensifies misuse.

The detrimental impacts of technology extend prominently into the realm of disinformation. A disturbing finding from MIT posits that lies spread six times faster than the truth, seemingly by design. The problem is particularly acute on mainstream social platforms such as Twitter and Facebook, where safety measures to curb the unchecked spread of misinformation appear to be being rolled back.

The predicament is more severe in the Global South, where populations are more susceptible to misinformation due to weaker institutional structures. Voluntary attempts to curb disinformation implemented between 2016 and 2018 have failed, intensifying the call for more robust regulation.

The introduction of new regulation, however, brings its own challenges, chief among them its slow emergence amidst a rapidly evolving technological landscape. Tech platforms are nudged towards accepting responsibility for the harm they cause and moving towards enhanced transparency and accountability. The newly introduced Digital Services Act (DSA), which provides real-time data revealing potential harms, is recognised as a progressive step towards adjusting to a world that now revolves around data.

The analysis furthermore conveys a collective call for citizens to redefine their engagement in the digital era, moving from being mere users to active participants. This sentiment resonates strongly with Maria Ressa's personal experience of being targeted by hate messages, threats and legal battles.

Tech companies are urged throughout the discourse to moderate their greed, reconsider their business models and take definitive action to counteract digital harm. The analysis culminates on a hopeful note, emphasising that governments are now aware of the challenges posed by the tech industry and of the urgency of accelerating their response.

In essence, the summary reflects a critical evaluation of technological progress, emphasises the necessity of ethical standards, and highlights the vital roles of governments, tech companies and active citizens in managing the myriad challenges posed by the evolving digital landscape.

Tatsuhiko Yamamoto, Professor, Faculty of Law, Keio University

Generative AI technologies have emerged as key contributors to the propagation of misinformation and disinformation. By virtue of their capabilities and the considerable volume of false content they can produce instantaneously, these systems can interfere with people's ability to make independent decisions and can disseminate biased or incorrect information on a grand scale. The findings also underscore the potential for societal 'hallucinations', or mass deceptions, triggered by the misuse of generative AI. These issues are complex and call for multidimensional solutions, particularly in the face of the potential 'tsunami of disinformation' that generative AI can engender.

The attention economy model further intensifies these challenges. In this environment, a plethora of information constantly vies for limited user attention, and misinformation and disinformation often win out because of their heightened engagement appeal. This dynamic drives the propagation of false or misleading content, providing fertile ground for its growth and influence.
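As a purely illustrative aside, this dynamic can be made concrete with a toy simulation. The sketch below is hypothetical: the posts and engagement figures are invented, and a real recommender system is vastly more complex, but it shows how a feed ranked purely on predicted engagement surfaces false but sensational items first:

```python
from dataclasses import dataclass


@dataclass
class Post:
    headline: str
    engagement_rate: float  # invented clicks-plus-shares per impression
    accurate: bool


# Invented examples: sensational falsehoods tend to score higher on
# engagement than measured, accurate reporting.
posts = [
    Post("Measured report on a new policy", 0.02, accurate=True),
    Post("Outrageous claim about a public figure", 0.12, accurate=False),
    Post("Fact-check of a viral rumour", 0.03, accurate=True),
    Post("Fear-mongering health rumour", 0.10, accurate=False),
]

# A feed optimising only for attention ranks by engagement alone.
feed = sorted(posts, key=lambda p: p.engagement_rate, reverse=True)
for post in feed:
    print(f"{post.engagement_rate:.2f}  accurate={post.accurate}  {post.headline}")
```

In this toy feed the two false items outrank both accurate ones, which is precisely the structural bias the attention-economy critique targets.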

Tech companies, as pioneers of the digital age, have a significant role to play in mitigating this disinformation crisis. There is a pressing call for these corporations to strengthen their commitment to combating false information, including by featuring reports from fact-checking organisations more prominently on their platforms. Stronger corporate responsibility, coupled with enhanced collaboration on fact-checking, marks the way forward; Yahoo Japan's alliance with the Japan Fact-Check Center is one successful precedent in combating the spread of misleading content.

Importantly, there is broad agreement that misinformation is a structural issue requiring redress at its roots, which necessitates enhanced data literacy and technological standardisation. The expanding and increasingly dominant power of tech companies is identified as a critical area requiring immediate action. 'Digital constitutionalism' emerges as a novel regulatory concept offering a promising way to check this amplified influence: crafting collaborative global legislation and international frameworks capable of effectively confronting and regulating platform companies.

In addition to technology-centred solutions, the importance of 'information health' is stressed, advocating a balanced and unbiased intake of information, likened to maintaining nutritional balance in one's diet. Literacy thus grows in significance as an essential tool against the misinformation epidemic. Improved awareness and understanding of one's information intake could, in turn, drive change within tech companies known to disseminate harmful or misleading information for profit.

Nezar Patria, Vice Minister, Ministry of Communication and Information Technology, Indonesia

The digital landscape exhibits a worrying trend: surging internet usage and AI technologies are playing a significant role in heightening the spread of misinformation and disinformation. An alarming 62% of internet users in Indonesia have encountered false or questionable online content, casting doubt on the trustworthiness of news circulated via social media platforms.

In response, Indonesia has devised an all-encompassing strategy. Central to this plan is a national digital literacy movement aimed at educating users and fostering greater discernment towards online information. At the intermediate level, procedures have been introduced to debunk hoaxes, ensuring false assertions are challenged and the truth prevails. At the downstream level, law enforcement activities have been intensified, holding those who propagate misinformation accountable. This comprehensive strategy presents a benchmark for other nations grappling with misinformation and disinformation.

There is broad agreement on the need to improve digital literacy worldwide. By equipping individuals with the skills to differentiate between true and false online content, societies can proactively tackle disinformation, a process known as "pre-bunking". The current circumstances also call for a revised governance structure that rewards the sharing of accurate information, aiming to dismantle the appeal of spreading false news.

Moreover, the call for collaborative efforts to consistently counter misinformation and disinformation is unequivocal. Especially in the face of emerging technologies such as generative AI, global cooperation in embracing such advancements offers a significant defence in this digital battle. To summarise, countering misinformation and disinformation clearly calls for a multifaceted approach that includes education, rigorous enforcement, governance reform and international cooperation.

Paul Ash, Prime Minister’s Special Representative on Cyber and Digital, New Zealand

The analysis underscores a significant and urgent need to address disinformation, which has been exponentially fuelled by advances in artificial intelligence (AI). The burgeoning prevalence of such disinformation is seen as a potential threat to democratic infrastructures and institutions and can also undermine essential human rights. These concerns pertain directly to Sustainable Development Goal (SDG) 16, which promotes peace, justice and strong institutions.

In this context, the analysis conveys a strong sentiment in favour of developing all-embracing solutions that integrate governments, industry sectors and, notably, civil society. There is a key emphasis on engaging communities impacted by disinformation from the earliest stages of response formulation. This inclusive approach aligns with SDG 16's broader aspiration for peace and justice.

The consensus of the analysis is that international human rights law should be the foundation of any measures taken, reflecting the sentiment that human rights principles underpin everything and that compromises on information integrity could undermine them.

Moreover, the analysis advocates a value-led institutional response involving multiple stakeholders, an ethos that aligns with SDG 17, which calls for global partnerships for the Goals. While recognising that multilateral action often lags behind the rapid pace of technological advancement, the captured sentiment supports gradual progress from voluntary commitments towards a formal, ground-up regulatory framework.

The experience of the Christchurch Call, notably highlighted in the analysis, stresses the need for an authentic multi-stakeholder response and underlines the essential role of comprehensive cooperation and collaboration in effectively addressing the global disinformation challenge.

In summary, the analysis portrays the pressing necessity of confronting disinformation proactively, harnessing the principles of SDG 16 and SDG 17 in formulating solutions. It advocates a cooperative approach encompassing all sectors of society and honouring international human rights law. Despite inherent challenges, it encourages forward momentum from voluntary action towards regulatory norms, propelling us closer to a digital world characterised by peace, justice and robust institutions.

Speakers

  • Deborah Steele: speech speed 128 words per minute; speech length 1551 words; speech time 725 seconds
  • Maria Ressa: speech speed 177 words per minute; speech length 2664 words; speech time 903 seconds
  • Nicolas Suzor: speech speed 143 words per minute; speech length 1853 words; speech time 777 seconds
  • Nezar Patria: speech speed 120 words per minute; speech length 515 words; speech time 258 seconds
  • Paul Ash: speech speed 183 words per minute; speech length 1127 words; speech time 369 seconds
  • Randi Michel: speech speed 154 words per minute; speech length 1354 words; speech time 527 seconds
  • Tatsuhiko Yamamoto: speech speed 140 words per minute; speech length 1877 words; speech time 806 seconds
  • Vera Jourova: speech speed 143 words per minute; speech length 2023 words; speech time 847 seconds