HIGH LEVEL LEADERS SESSION II
8 Oct 2023 02:15h - 03:45h UTC
Event report
Speakers and Moderators
Speakers
- Maria Ressa, Journalist, Editor and Co-founder, Rappler, Philippines
- Nicolas Suzor, Member, Oversight Board, Australia
- Nezar Patria, Vice Minister, Ministry of Communication and Information Technology, Indonesia
- Paul Ash, Prime Minister’s Special Representative on Cyber and Digital, New Zealand
- Randi Michel, Director of Technology and Democracy, White House National Security Council
- Tatsuhiko Yamamoto, Professor, Faculty of Law, Keio University
- Vera Jourova, Vice President, European Commission for Values and Transparency
Moderators
- Deborah Steele, Director of News, Asia-Pacific Broadcasting Union, Malaysia
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Deborah Steele, Director of News, Asia-Pacific Broadcasting Union, Malaysia
The analysis underscores the increasing prevalence and severe implications of misinformation and disinformation, fuelled largely by rapid developments in generative AI. This evolving technology is capable of creating synthetic content to such a complex level that it becomes nearly indistinguishable from authentic materials. This presents an enormous challenge in terms of responding to, and rooting out, misinformation and disinformation.
The situation is further complicated due to the structure of digital platforms, where algorithms dictate the type of content delivered to each user. Many users remain unacquainted with the concept of echo chambers or the algorithmic nature of their feeds. As a result, polarised information consumption is perpetuated, amplifying the dissemination of disinformation and leading to further division and misinformation.
To address these issues, the analysis suggests a holistic approach. This includes a substantial push towards enhancing media literacy. Moreover, it recommends strong political commitment to ensure the integrity of information-sharing systems, a task unquestionably challenging yet pressing in light of dwindling public trust in institutions.
Regulation also forms a crucial part of the solution, and there is a vital call for more comprehensive regulatory measures. Alongside this, technological interventions, such as advanced authentication tools, play a pivotal role. These techniques can help distinguish synthetic content from genuine, thereby mitigating some of the risks associated with generative AI.
This in-depth analysis connects with several Sustainable Development Goals (SDGs) — particularly, those concerned with industry, innovation, and infrastructure; reduced inequality; and peace, justice, and strong institutions. The study's findings stress the urgency of proactive action to counter misinformation and disinformation and contribute positively to these universal goals. Overall, this accentuates the critical intersection of technology and societal challenges, underscoring the importance of informed governance and policymaking in this sphere.
Vera Jourova, Vice President, European Commission for Values and Transparency
The European Union (EU) is taking strides to regulate generative artificial intelligence (AI) effectively and counteract the proliferation of disinformation. These efforts are primarily aimed at high-risk AI applications, particularly deep fakes: digitally manipulated video or audio that portrays individuals saying or doing things they never did. Without clear labelling or watermarking, such deceptive content has the potential to significantly influence public perception and inflict harm, notably in electoral processes.
Online harassment and hate speech, particularly against women, racial and ethnic minorities, and the LGBTQ+ community, are reported to be on the rise. This issue requires immediate attention; however, it involves the challenge of identifying and removing such malicious content without infringing upon the principle of freedom of speech, which the EU firmly upholds.
The EU, therefore, turns to legislation such as the Digital Services Act to obstruct illegal online content monetisation and forestall disinformation dissemination on the Internet. A legally binding act, it offers a robust structure for enforcement, complete with penalties for contraventions. Alongside this, the EU is also advocating for a directive against violence towards women, particularly emphasising digital violence.
Along with regulation, the EU acknowledges the role of technology companies and social media platforms in controlling disinformation. The EU urges these corporations to assume responsibility by boosting fact-checking capabilities and complying with a Code of Practice against disinformation, a series of voluntary commitments designed to combat misinformation online effectively.
Support for independent media also forms a crucial aspect of the EU's wider strategy. The EU prioritises the capability of media outlets to deliver factual information, thereby aiding citizens in making informed and free choices. EU policy supports these outlets, strengthening their capacity for accurate and autonomous reporting.
Within the fight against disinformation, media literacy is recognised as a long-term challenge. The EU heavily funds media literacy, working in close collaboration with member states on numerous projects designed to enhance the media literacy skills of the populace.
Furthermore, the EU emphasises the necessity for global democracies to cooperate in crafting international rules. It suggests that collaboration at the G7 level could be an efficient way to address issues around an AI code of conduct. This global cooperation is perceived as a pivotal step in creating equitable standards across jurisdictions.
The EU's collaborative approach to regulation, involving civil society, local media and academia, underlines its commitment to balancing technological advancement with public welfare. Informed by previous experience, the EU emphasises the need for broader protection of citizens' minds and souls, as well as of consumers' rights and welfare.
Lastly, the EU anticipates stricter regulation for political advertising online. Transparency is the central theme here. Citizens should comprehend the content they consume and not fall for manipulative information. Transparency in online political advertising isn't simply in line with the broader endeavour to rein in disinformation; it is also considered crucial for promoting a healthier, more democratic society.
Randi Michel, Director of Technology and Democracy, White House National Security Council
Artificial Intelligence (AI) technologies have emerged as a proverbial double-edged sword, celebrated for bolstering innovation yet heavily criticised for their exploitation in disseminating disinformation, thus threatening human rights, democracy, national security, and public safety.
In response to this grave concern, the US government emphasises the importance of transparency and public awareness on synthetic content. Efforts have resulted in securing commitments from 15 leading AI enterprises to advance responsible innovation, alongside the crucial development of guidelines, tools, and practices to authenticate digital content. These initiatives depict positive strides in mitigating false narratives.
The engagement of civil society, academia, and the private sector is deemed imperative in addressing issues associated with AI-generated media. Strengthening local and global cooperation is key to safeguarding people from the adverse effects of fabricated or manipulated media. Meaningful dialogues are ongoing with top AI experts, consumer protection advocates and civil society organisations, reinforcing a collaborative approach.
On the technological front, tools are instrumental in waging a war against falsehoods. AI's role is monumental in identifying and labelling artificial content. However, voluntary commitments from AI companies are viewed as inadequate to circumvent associated risks, prompting the administration to formulate an executive order and advocate bipartisan legislation to guarantee responsible use of AI.
In the global context, a shared international norm is deemed necessary to tackle the issue, calling for technology companies to demonstrate greater transparency. Furthermore, multi-stakeholder engagement is identified as crucial, echoing calls for a collective effort. To build resilience, the US State Department announced the Promoting Information Integrity and Resilience initiative, offering technical assistance to organisations and capacity-building to local governments and media outlets.
Summing up, while efforts are underway to enhance transparency and regulate AI misuse, there is an explicit call to ensure these measures do not curb internet freedom or lead to censorship. The commitment to uphold human rights and democratic freedoms remains paramount. All in all, the analysis portrays a multifaceted issue necessitating the engagement of multiple sectors and nations, responsible innovation, and the establishment of international norms, all whilst respecting individual freedoms and rights.
Nicolas Suzor, Member, Oversight Board, Australia
The analysis provides insightful revelations centred on complex topics such as content moderation, misinformation, AI innovation, synthetic media, and proposed solutions among others.
Content moderation and the authentication of media face grave challenges in an era of rapidly advancing technology, especially with developments in AI and the emergence of synthetic media. The landscape is further complicated by the difficulty of distinguishing between misinformation and disinformation, which necessitates a careful examination of who contributes to the distribution of harmful false material. Both misinformed ordinary users and malicious actors play a role: the former may circulate detrimental content unknowingly, the latter deliberately.
Notwithstanding these challenges, advancements in AI and generative AI, initially perceived negatively due to their contribution to synthetic media's creation and dissemination, also hold many potentially beneficial applications. These technological improvements remain inherently neutral, suggesting their suitability for practical application in our progressively digitised age. However, it becomes increasingly crucial to mount a robust response against the potential repercussions of these innovations, particularly due to the difficulty in identifying and labelling AI-generated synthetic media.
Significant attention is garnered by the role of tech companies in regulating misinformation. The adaptability of these firms in combating misinformation, as evidenced during the Covid-19 pandemic, underpins their potential for positive impact. Nevertheless, these roles invite controversy, primarily owing to the firms' reliance on technical responses rather than human-centred strategies.
Regulation of false information presents a complex challenge, particularly around distinguishing between parody, satire, and acceptable speech. These complexities underscore the inherent challenges within the legal and regulatory framework in addressing this issue. As such, the Oversight Board, an emerging platform for discussing content-related issues, is viewed as a promising solution, notably given its active involvement in cases pertaining to digitally manipulated media.
Amidst these technology-driven changes, there is a need for focused attention towards marginalised communities. Technology risks perpetuating existing inequalities, and there is a call for tech firms to proactively implement measures. The development of comprehensive system safeguards and protections is strongly advocated, noting that vulnerable individuals often bear the brunt of misinformation and abuse.
The importance of multistakeholderism in governance is accentuated, emphasising that no single solution suffices for the pervasive problem of harmful content. Despite their limitations, including the use of censorship to maintain power, state governments play a sizeable role, accounting for a considerable proportion of total content removal requests. Thus, civil society's active role is critical in resisting state-imposed censorship.
In summary, these findings offer a comprehensive perspective of the challenges and potential solutions confronting content moderation, misinformation, and the role of AI in our progressively digital world.
Maria Ressa, Journalist, Editor and Co-Founder, Rappler, Philippines
The comprehensive review reveals serious concerns about the negative impacts of Generative AI, which reportedly leads to the weaponisation of human emotions, more specifically fear, anger, and hatred. Allegedly, the first human encounter with AI fostered this proliferation of negative emotions. Furthermore, Generative AI is linked to triggering an epidemic of loneliness and isolation, with glaring evidence drawn from instances of suicide where the detrimental influence of AI has been explicitly cited.
Although demonstrating an appreciation for technological innovations, the identified sentiment underscores the necessity for responsible application of technology. The analysis further outlines the imperative need for robust regulation and enhanced public protection against the misuse of AI. Tech start-ups are highlighted for falsely propagating AI as a preferable companion, a misleading and dangerous perspective intensifying its misuse.
Detrimental impacts of technology extend prominently into the realm of disinformation. A disturbing finding from MIT posits that lies spread six times faster than the truth, seemingly by design. The problem is particularly acute on mainstream social platforms such as Twitter and Facebook, where safety measures to curb the unchecked dissemination of misinformation appear to have been rolled back.
The predicament becomes more severe in the context of the Global South, where populations are more susceptible to misinformation due to weaker institutional structures. Despite voluntary attempts to curb disinformation implemented between 2016 and 2018, these measures have failed, intensifying the call for more robust regulations.
However, the introduction of new regulations unleashes its own set of challenges, topmost being their slow emergence amidst the rapidly evolving technological landscape. Tech platforms are nudged towards accepting responsibility for the harm they cause and moving towards enhanced transparency and accountability. The newly introduced Digital Services Act (DSA) which provides real-time data, revealing potential harms is recognised as a progressive step towards adjusting to a world now revolving around data.
The analysis furthermore conveys a collective call for citizens to redefine their engagement in the digital era, moving from being mere users to active participants. This sentiment resonates strongly with Maria Ressa's personal experience of being targeted by hate messages, threats, and legal battles.
Tech companies are urged throughout the discourse to moderate their greed, reconsider their business models, and take definitive actions to counteract digital harm. The analysis culminates on a hopeful note, emphasising the role of governments, now aware of the challenges posed by the tech industry and the urgency of accelerating their response accordingly.
In essence, the extensive summary reflects a critical evaluation of technological progress, places vital emphasis on the necessity for ethical standards, and highlights the essential role of governments, tech companies, and active citizen participation in managing the myriad challenges posed by the evolving digital landscape.
Tatsuhiko Yamamoto, Professor, Faculty of Law, Keio University
Generative AI technologies have emerged as key contributors to the propagation of misinformation and disinformation in our society. These systems, by virtue of their capabilities and the considerable amount of false content they can produce instantaneously, hold the potential to interfere with people's ability to make independent decisions and disseminate biased or incorrect information on a grand scale. The examined findings also underscore the potential for societal 'hallucinations' or mass deceptions triggered by the misuse of generative AI. However, these issues are complex and call for multidimensional and intricate solutions, particularly in the face of the potential 'tsunami of disinformation' that generative AI can engender.
The attention economy model further intensifies the challenges posed by misinformation and disinformation. This refers to an environment in which a plethora of information constantly vies for limited user attention, with misinformation and disinformation often winning out due to their heightened engagement appeal. This dynamic accelerates the propagation of false or misleading content, providing fertile ground for its growth and influence.
Tech companies, seen as pioneers in this digital age, have a significant role to play in mitigating this disinformation crisis. Indeed, there is a pressing call for these corporations to bolster their commitment to combat false information, featuring reports from fact-checking organisations more prominently on their platforms. Strengthening corporate responsibility, coupled with enhanced collaboration in fact-checking efforts, marks the way forward. Yahoo Japan's alliance with Japan Fact-Check Center is one such successful precedent in combating the spread of misleading content.
Importantly, there is universal agreement on perceiving misinformation as a structural issue requiring targeted redress at its roots. This necessitates an enhancement of data literacy and the development of technological standardisation. The expanding power of tech companies, which are becoming increasingly dominant, has been identified as a critical area necessitating immediate action. 'Digital constitutionalism' emerges as a novel regulatory concept offering a promising way to control this amplified influence. It involves crafting collaborative global legislation and international frameworks capable of effectively confronting and regulating these platform companies.
In addition to technology-centred solutions, the importance of 'information health' is stressed, advocating for a balanced and unbiased intake of information, likened to maintaining nutritional balance in food consumption. Literacy thus gains significance as an essential tool in combating the misinformation epidemic. Improved awareness and understanding of one's information intake could instigate a consequential transformation within tech companies known to disseminate harmful or misleading information for profit.
Nezar Patria, Vice Minister, Ministry of Communication and Information Technology, Indonesia
The digital landscape is exhibiting a worrying trend with the surge in internet usage and AI technologies playing a significant role in heightening the spread of misinformation and disinformation. An alarming 62% of internet users in Indonesia have encountered false or questionable online content, casting doubt on the trustworthiness of news circulated via social media platforms.
In response to these concerns, Indonesia has devised an all-encompassing strategy. Central to this plan is a national digital literacy movement aimed at educating users and fostering greater discernment towards online information. At the intermediate level, procedures have been introduced to debunk hoaxes, ensuring false assertions are challenged and the truth prevails. At the downstream level, law enforcement activities have been intensified, holding those propagating misinformation accountable for their actions. This comprehensive strategy presents a benchmark for other nations grappling with misinformation and disinformation.
There is broad agreement on the need to improve digital literacy worldwide. By equipping individuals with the skills to effectively differentiate between true and false online content, societies can proactively tackle disinformation, a process known as "pre-bunking". Furthermore, the current circumstances call for a revised governance structure that rewards sharing accurate information, aiming to dismantle the appeal of spreading false news.
Moreover, the call for collaborative efforts to consistently counter misinformation and disinformation is unequivocal. Especially in the face of emerging technologies such as generative AI, global efforts to embrace such advancements offer a significant defence in this digital battle. To summarise, it's clear that countering misinformation and disinformation calls for a multifaceted approach that includes education, rigorous enforcement, governance alterations, and international cooperation.
Paul Ash, Prime Minister’s Special Representative on Cyber and Digital, New Zealand
The analysis underscores a significant and urgent need to address disinformation, which has been exponentially fuelled by advancements in artificial intelligence (AI). The burgeoning prevalence of such disinformation is seen as a potential threat to democratic infrastructures and institutions, and can also undermine essential human rights. These concerns directly pertain to Sustainable Development Goal (SDG) 16, which promotes Peace, Justice and Strong Institutions.
In this context, the analysis conveys a strong sentiment favouring the development of all-embracing solutions. These solutions must integrate governments, various industry sectors and, notably, civil society. There is a key emphasis on engaging communities impacted by disinformation from the initial stages of response formulation. This inclusive approach aligns with SDG 16's broader aspiration for peace and justice.
The analysis consensus is that international human rights law should be the foundation of measures taken. This reflects the sentiment that human rights principles underpin everything and that compromises on information integrity could potentially undermine them.
Moreover, the analysis advocates for a value-led institutional response involving multiple stakeholders. This ethos aligns with SDG 17, which calls for global Partnerships for the Goals. Whilst recognising that multilateral action often lags behind the rapid pace of technological advancement, the analysis supports gradual progress from voluntary commitments towards a formal, ground-up regulatory framework.
The experience gleaned from the Christchurch Call, notably highlighted in the analysis, stresses the need for an authentic multi-stakeholder response. This underlines the essential role of comprehensive cooperation and collaboration in effectively addressing the global disinformation challenge.
In summary, the analysis portrays the pressing necessity to confront disinformation proactively, harnessing the principles of SDG 16 and SDG 17 in formulating solutions. It advocates for a cooperative approach encompassing all society sectors and honouring international human rights laws. Despite inherent challenges, it encourages forward momentum from voluntary action towards regulatory norms, propelling us closer to a digital world characterised by peace, justice, and robust institutions.
Speakers
Deborah Steele
Speech speed
128 words per minute
Speech length
1551 words
Speech time
725 secs
Arguments
Misinformation and disinformation are major challenges, with generative AI posing a significant risk factor
Supporting facts:
- New developments in generative AI almost every day
- Enormous challenge to respond to misinformation and disinformation
- Generative AI can create synthetic content that is difficult to distinguish from real
Topics: Misinformation, Disinformation, Generative AI, Synthetic content
Addressing misinformation and disinformation requires a multi-pronged approach
Supporting facts:
- There's a need to improve media literacy
- Political commitment required to protect integrity of information sharing systems
- There's a call for regulatory measures and technological interventions
Topics: Regulation, Media literacy, Public trust, Authentication tools
Maria Ressa
Speech speed
177 words per minute
Speech length
2664 words
Speech time
903 secs
Arguments
Generative AI leads to the weaponization of human emotions, notably fear, anger and hate
Supporting facts:
- The first contact of humans with AI led to the weaponization of fear, anger, and hate
- Tribalism or the us versus them mentality has been noticed
Topics: Generative AI, Weaponization of Emotion, Social Media
Generative AI incites loneliness and isolation
Supporting facts:
- There have been cases of people committing suicide citing influence of AI
- Generative AI has been linked to an epidemic of loneliness
- There are lawsuits against the misuse of AI
Topics: Generative AI, Loneliness, Isolation
Social media is, by design, meant to spread lies faster than the facts
Supporting facts:
- MIT said lies spread six times faster
- All safety measures have been rolled back on platforms like Twitter and Facebook
Topics: Disinformation, Technology, Social Media
Voluntary measures to control disinformation did not work
Supporting facts:
- Voluntary measures were tried in 2016 through 2018, but they didn't work
Topics: Disinformation, Regulations, Social Media
Most vulnerable to misinformation are individuals in the Global South with weaker institutions
Topics: Global South, Disinformation, Vulnerable Groups
The current regulatory landscape needs to adapt to a world where data and its flow is central
Supporting facts:
- The DSA (Digital Services Act) provides real-time data which allows to see patterns and trends, revealing potential harms
Topics: Regulation, Data Protection, DSA
Freedom of speech is not the problem, but freedom of reach that makes false information spread faster than facts is an issue
Supporting facts:
- Quote by Sacha Baron Cohen refers to problematic distribution model on social media platforms
Topics: Disinformation, Social Media Platforms, Regulation
AI will exacerbate issues of manipulation and misinformation
Supporting facts:
- Predicted increase in usage of generative AI which could further promote fear, anger, and hate
Topics: Artificial Intelligence, Disinformation
Need for understanding and strategic use of new technology for the benefit of public sphere and democracy
Supporting facts:
- Rappler is collaborating with OpenAI to better understand and leverage this technology for public benefit
Topics: OpenAI, Technology, Democracy
The necessity of informing people about how they are being manipulated
Supporting facts:
- The rise of social media and digital platforms has made it easier to spread misinformation and manipulate opinions
Topics: Media Manipulation, Technology
Emphasize transparency and accountability using existing blueprints and values from international institutes
Supporting facts:
- Blueprint for change of the White House by Alondra Nelson and other guidelines by OECD and the Internet Governance Forum
Topics: Transparency, Accountability, Internet Governance
Tech companies have the power to stop digital harms but are choosing not to
Supporting facts:
- Tech companies are choosing not to stop the harms
- Rappler was created on Facebook
Topics: Tech companies, Digital harms, Business models
Governments need to work faster to address the issues posed by the tech industry
Supporting facts:
- Governments are late to the game but now know the problem
Topics: Government, Tech industry, Regulation
Report
The comprehensive review reveals serious concerns about the negative impacts of Generative AI, which reportedly leads to the weaponisation of human emotions, more specifically fear, anger, and hatred. Allegedly, the first human encounter with AI fostered this proliferation of negative emotions.
Furthermore, Generative AI is linked to triggering an epidemic of loneliness and isolation, with glaring evidence drawn from instances of suicide where the detrimental influence of AI has been explicitly cited. Although demonstrating an appreciation for technological innovations, the identified sentiment underscores the necessity for responsible application of technology.
The analysis further outlines the imperative need for robust regulation and enhanced public protection against the misuse of AI. Tech start-ups are highlighted for falsely propagating AI as a preferable companion, a misleading and dangerous perspective intensifying its misuse. Detrimental impacts of technology extend prominently into the realm of disinformation.
A disturbing revelation from MIT posits that lies spread six times faster than the truth, seemingly by design. This problem is particularly exacerbated on mainstream social platforms like Twitter and Facebook, where safety measures to curb unchecked dissemination of misinformation appear to be in rollback.
The predicament becomes more severe in the context of the Global South, where populations are more susceptible to misinformation due to weaker institutional structures. Despite voluntary attempts to curb disinformation implemented between 2016 and 2018, these measures have failed, intensifying the call for more robust regulations.
However, the introduction of new regulations unleashes its own set of challenges, topmost being their slow emergence amidst the rapidly evolving technological landscape. Tech platforms are nudged towards accepting responsibility for the harm they cause and moving towards enhanced transparency and accountability.
The newly introduced Digital Services Act (DSA), which provides real-time data revealing potential harms, is recognised as a progressive step towards adjusting to a world that now revolves around data. The analysis furthermore records a collective call for citizens to redefine their engagement in the digital era, moving from being mere users to active participants.
This sentiment resonates strongly with Maria Ressa's personal experience of being targeted by hate messages, threats, and legal battles. Tech companies are urged throughout the discourse to moderate their greed, reconsider their business models, and take definitive actions to counteract digital harm.
The analysis culminates on a hopeful note, emphasising the role of governments, now aware of the challenges posed by the tech industry and the urgency of accelerating their response accordingly. In essence, the summary reflects a critical evaluation of technological progress, stresses the necessity of ethical standards, and emphasises the vital role of governments, tech companies, and active citizen participation in managing the myriad challenges posed by the evolving digital landscape.
NS
Nicolas Suzor
Speech speed
143 words per minute
Speech length
1853 words
Speech time
777 secs
Arguments
Existing responses to content moderation are being seriously challenged
Supporting facts:
- The authentication, flow, and trust of media in our ecosystem is being challenged
Topics: content moderation, AI, media
Distinguishing between disinformation and misinformation is difficult
Supporting facts:
- Both malicious actors and ordinary users play a role in spreading harmful, false material
Topics: disinformation, misinformation, media
AI generated synthetic media is hard to label and identify
Supporting facts:
- With the innovations in generative AI, all media can be manipulated
Topics: synthetic media, AI
Tech companies have a role to play in controlling misinformation but such roles can be controversial
Supporting facts:
- Tech companies were able to change their stance during the COVID pandemic and participate in battling disinformation.
- Their methods are still primarily technically based using processes adapted from spam reduction techniques.
Topics: misinformation, tech companies
There is an urgent need for research about media literacy and inoculation
Supporting facts:
- The ability to correctly interpret media is an important goal.
Topics: media literacy, inoculation, research
Technology that learns from an existing system of hierarchy will likely perpetuate existing inequalities.
Topics: Technology, Machine Learning, Inequality, Marginalized Users
Vulnerable people experience a greater proportion of abuse and misinformation hence more needs to be done to protect them.
Topics: Vulnerable Users, Misinformation
Multistakeholderism is important as there is no single solution
Supporting facts:
- Government cannot make all harmful content illegal
- Much of harmful false content spread is lawful
Topics: governance, harmful content, multistakeholder approach
Limits to what governments can achieve
Supporting facts:
- 72% of removal requests for content come from state actors
- State actors sometimes use censorship to hold onto power
Topics: governance, harmful content, censorship
Can't trust the private sector or mainstream media
Supporting facts:
- Disinformation is often amplified by both mainstream media and social media
Topics: private sector, media, disinformation
Report
The analysis provides insightful revelations centred on complex topics such as content moderation, misinformation, AI innovation, synthetic media, and proposed solutions among others. The task of content moderation and the authentication of media are confronting grave challenges in the era of rapidly advancing technology, especially with developments in AI and the emergence of synthetic media.
The intricacies of this landscape are compounded by the perplexing difficulty of distinguishing between misinformation and disinformation, necessitating a meticulous exploration of who contributes to the distribution of harmful false material. Evidence of this conundrum arises from misinformed ordinary users and malicious actors alike: the former may unknowingly foster the circulation of detrimental content that the latter spread deliberately.
Notwithstanding these challenges, advancements in AI and generative AI, initially perceived negatively for their contribution to the creation and dissemination of synthetic media, also hold many potentially beneficial applications. The technology itself remains inherently neutral, suggesting its suitability for constructive application in our progressively digitised age.
However, it becomes increasingly crucial to mount a robust response against the potential repercussions of these innovations, particularly due to the difficulty in identifying and labelling AI-generated synthetic media. Significant attention is garnered by the role of tech companies in regulating misinformation.
The adaptability of these firms in combating misinformation, as evidenced during the Covid-19 pandemic, underpins their potential for positive impact. Nevertheless, these roles invite controversy, primarily owing to the firms' reliance on technical responses rather than human-centred strategies. Regulation of false information presents a complex challenge, particularly around distinguishing between parody, satire, and acceptable speech.
These complexities underscore the inherent challenges within the legal and regulatory framework in addressing this issue. As such, the Oversight Board, an emerging platform for discussing content-related issues, is viewed as a promising solution, notably given its active involvement in cases pertaining to digitally manipulated media.
Amidst these technology-driven changes, there is a need for focused attention towards marginalised communities. Technology risks perpetuating existing inequalities, and there is a call for tech firms to proactively implement measures. The development of comprehensive system safeguards and protections is strongly advocated, noting that vulnerable individuals often bear the brunt of misinformation and abuse.
The importance of multistakeholderism in governance is accentuated, emphasising that no single solution suffices for the pervasive problem of harmful content. State actors play a sizeable role, accounting for a considerable proportion (some 72%) of content removal requests, yet their limitations are clear, including the use of censorship to hold onto power.
Thus, civil society's active role is critical in resisting state-imposed censorship. In summary, these findings offer a comprehensive perspective of the challenges and potential solutions confronting content moderation, misinformation, and the role of AI in our progressively digital world.
NP
Nezar Patria
Speech speed
120 words per minute
Speech length
515 words
Speech time
258 secs
Arguments
The spread of misinformation and disinformation has increased with the rise in internet usage and development of AI technologies.
Supporting facts:
- 62% of Indonesian internet users have seen information or content online that they believe to be false or dubious.
- Many users doubt the accuracy of news they read on social media.
Topics: Internet usage, AI technologies, Misinformation, Disinformation
Indonesia has developed a comprehensive strategy to counter misinformation and disinformation.
Supporting facts:
- The strategy involves a national digital literacy movement.
- Hoaxes are debunked at the intermediate level and law enforcement activities are supported at the downstream level.
Topics: Misinformation, Disinformation, Indonesian strategies
Report
The digital landscape is exhibiting a worrying trend with the surge in internet usage and AI technologies playing a significant role in heightening the spread of misinformation and disinformation. An alarming 62% of internet users in Indonesia have encountered false or questionable online content, casting doubt on the trustworthiness of news circulated via social media platforms.
In response to these concerns, Indonesia has devised an all-encompassing strategy. Central to this plan is a national digital literacy movement aimed at educating users and fostering greater discernment towards online information. An intermediate level procedure has been introduced to debunk hoaxes, ensuring false assertions are challenged and the truth prevails.
At the downstream level, they have intensified law enforcement activities, holding those propagating misinformation accountable for their actions. This comprehensive strategy presents a benchmark for other nations grappling with misinformation and disinformation. There is a broad agreement on the need for improving digital literacy worldwide.
By equipping individuals with the skills to differentiate effectively between true and false online content, societies can proactively tackle disinformation, a process known as "pre-bunking". Furthermore, the current circumstances call for a revised governance structure that rewards the sharing of accurate information, aiming to dismantle the appeal of spreading false news.
Moreover, the call for collaborative efforts to consistently counter misinformation and disinformation is unequivocal. Especially in the face of emerging technologies such as generative AI, global efforts to embrace such advancements offer a significant defence in this digital battle. To summarise, it's clear that countering misinformation and disinformation calls for a multifaceted approach that includes education, rigorous enforcement, governance alterations, and international cooperation.
PA
Paul Ash
Speech speed
183 words per minute
Speech length
1127 words
Speech time
369 secs
Arguments
Need for pace in tackling disinformation
Supporting facts:
- The rise of new AI systems has really sped that issue up
- The increasing prevalence of authoritarian states
- The impact of disinformation on democratic institutions
Topics: Disinformation, AI
International human rights law should be the foundation of any response
Supporting facts:
- Human rights principles underpin everything
- Information integrity issues could undermine human rights
Topics: Human rights, AI
Report
The analysis underscores a significant and urgent need to address disinformation, which has been exponentially fuelled by advancements in artificial intelligence (AI). The burgeoning prevalence of such disinformation is seen as a threat to democratic infrastructures and institutions, and one that can also undermine essential human rights.
These concerns directly pertain to SDG (Sustainable Development Goal) 16, which promotes Peace, Justice and Strong Institutions. In this context, the analysis conveys a strong sentiment favouring the development of all-embracing solutions. These solutions must integrate governments, various industry sectors, and notably, civil society.
There is a key emphasis on engaging communities impacted by disinformation from the initial stages of formulating responses. This inclusive approach aligns with SDG 16's broader aspiration for peace and justice. The consensus of the analysis is that international human rights law should be the foundation of any measures taken.
This reflects the sentiment that human rights principles underpin everything and that compromises on information integrity could potentially undermine them. Moreover, the analysis advocates for a value-led institutional response involving multiple stakeholders. This ethos aligns with SDG 17, which calls for global Partnerships for the Goals.
Whilst recognising that multilateral action often lags behind the rapid pace of technological advancement, the captured sentiment supports a distinctive perspective: a gradual progression from voluntary commitments towards a formal, ground-up regulatory framework. The experience of the Christchurch Call, notably highlighted in the analysis, stresses the need for an authentic multi-stakeholder response.
This point underlines the essential role of comprehensive cooperation and collaboration in effectively addressing the global disinformation challenge. In summary, the analysis portrays the pressing necessity to confront disinformation proactively, harnessing the principles of SDG 16 and SDG 17 in formulating solutions.
It advocates for a cooperative approach encompassing all society sectors and honouring international human rights laws. Despite inherent challenges, it encourages forward momentum from voluntary action towards regulatory norms, propelling us closer to a digital world characterised by peace, justice, and robust institutions.
RM
Randi Michel
Speech speed
154 words per minute
Speech length
1354 words
Speech time
527 secs
Arguments
Generative AI technologies increase the speed with which bad actors can generate and spread mis- and disinformation, posing threats to human rights, democracy, national security, and public safety
Supporting facts:
- Bad actors can utilize AI to generate realistic content that erodes trust
- Inauthentic content during elections can put the credibility of electoral processes at risk
Topics: Artificial Intelligence, Disinformation, Democracy, National Security
The US government is working to increase transparency and awareness of synthetic content
Supporting facts:
- The Biden-Harris administration has secured commitments from 15 leading AI companies to advance responsible innovation
- Work is underway to develop guidelines, tools, and practices to authenticate digital content and detect synthetic content
Topics: Artificial Intelligence, Disinformation, Information Transparency
Technology solutions are a key element in combatting falsehood content
Supporting facts:
- Differentiating between synthetic and false narratives is crucial
- AI can identify and label synthetic content
Topics: AI, Transparency, Information Integrity
A bottom-up approach is required to build resilience
Supporting facts:
- US State Department announced the Promoting Information Integrity and Resilience initiative to provide technical aid to organizations
- The initiative offers capacity building to local governments and media outlets
Topics: Civil Society, Resilience Building, Information Integrity
Importance of governments in implementing authentication and provenance measures
Supporting facts:
- Seeking to build global norm on the issue
Topics: Government policy, Internet security
Technology companies play a key role in providing transparency
Supporting facts:
- Encouragement for voluntary commitments
Topics: Transparency, Technology companies
Engagement of civil society and multi-stakeholder engagement is important
Topics: Civil Society, Multi-stakeholder Engagement
Efforts to advance transparency should not become censorship or infringe on internet freedom
Supporting facts:
- Belief that disinformation should be addressed by disseminating accurate information, not by limiting content
Topics: Internet Freedom, Transparency, Censorship
Report
Artificial intelligence (AI) technologies have emerged as a proverbial double-edged sword, celebrated for bolstering innovation yet criticised for their exploitation in disseminating disinformation, threatening human rights, democracy, national security, and public safety. In response to this grave concern, the US government emphasises the importance of transparency and public awareness of synthetic content.
Efforts have resulted in securing commitments from 15 leading AI enterprises to advance responsible innovation, alongside the crucial development of guidelines, tools, and practices to authenticate digital content. These initiatives depict positive strides in mitigating false narratives. The engagement of civil society, academia, and private sector is deemed imperative in addressing issues associated with AI-generated media.
Strengthening local and global cooperation is key in safeguarding people from the adverse effects of fabricated or manipulated media. Meaningful dialogues are ongoing with top AI experts, consumer protection advocates and civil society organisations, reinforcing a collaborative approach. On the technological front, tools are instrumental in waging a war against falsehoods.
AI's role is monumental in identifying and labelling artificial content. However, voluntary commitments from AI companies are viewed as inadequate to circumvent associated risks, prompting the administration to formulate an executive order and advocate bipartisan legislation to guarantee responsible use of AI.
In the global context, a shared global norm is deemed necessary to tackle the issue, calling for technology companies to demonstrate greater transparency. Furthermore, multi-stakeholder engagement is identified as crucial, echoing calls for collective effort. To build resilience, the US State Department announced the Promoting Information Integrity and Resilience initiative, offering technical assistance to organisations and capacity-building to local governments and media outlets.
Summing up, while efforts are underway to enhance transparency and regulate AI misuse, there is an explicit call to ensure these measures do not curb internet freedom or lead to censorship. The commitment to uphold human rights and democratic freedoms remains paramount.
All in all, the analysis portrays a multifaceted issue necessitating engagement of multiple sectors and nations, responsible innovation, and the establishment of international norms, all whilst respecting individual freedoms and rights.
TY
Tatsuhiko Yamamoto
Speech speed
140 words per minute
Speech length
1877 words
Speech time
806 secs
Arguments
Generative AI can be a valuable tool, but it also has the potential to spread misinformation and disinformation
Supporting facts:
- Generative AI produces content that can contain biases or mistakes
- The technology could interfere with people's ability to make independent decisions
- Generative AI can create large amounts of misinformation instantaneously
Topics: Generative AI, Misinformation, Disinformation
The misuse of generative AI can lead to societal hallucination
Supporting facts:
- Output from generative AI that contains poisoned content can itself be used as training data
- This could lead to a potential 'hallucination' or mass deception within society
Topics: Generative AI, Misinformation, Disinformation
The issue of misinformation has become more serious due to the predominance of the attention economy.
Supporting facts:
- Attention economy refers to the increasing abundance of information and the limited time users can devote.
- This situation has led to the use of more sophisticated algorithms and recommendation systems to engage users.
- Misinformation and disinformation can win higher engagement and tends to be disseminated more.
Topics: misinformation, disinformation, attention economy
There are signs of hope in combating misinformation, such as global awareness and improved data literacy.
Supporting facts:
- Awareness of information pollution is now shared across borders.
- There is collaboration between the public and private sector.
- Consensus on improving data literacy and education.
Topics: misinformation, disinformation, data literacy, education
The attention economy business model can amplify harmful emotions like hatred, fear, and anger
Supporting facts:
- In the attention economy, communities can be placed in vulnerable situations
- Hate speech, disinformation and misinformation when combined, become a powerful and dangerous tool
Topics: attention economy, online communities, hate speech, disinformation, misinformation
Companies should take responsibility in mitigating the spread of disinformation online
Supporting facts:
- Fact-checking organizations can issue reports on misinformation and disinformation
- Companies should provide these articles to their users and feature them prominently
Topics: online platforms, company responsibility, fact-checking
Literacy is critical for being better producers and consumers
Supporting facts:
- Tatsuhiko Yamamoto talks about information health which ties in with being more aware of data inflow
Topics: Literacy, Information Consumption
We need to counter tech companies that disseminate harmful information for their profit
Supporting facts:
- Believes that increased awareness can lead to criticism and subsequent structural change of such companies
Topics: Tech Companies, Harmful Information
There is an agreement on the need for immediate action to tackle the issues resulting from the expanding power of tech companies
Supporting facts:
- The platform companies, tech companies, need to be also at the table. This is quite important. Platform is expanding and becoming gigantic and very powerful.
Topics: Tech companies, International collaboration, Immediate action
Tech companies need to be controlled by legislation, proposing the concept of 'digital constitutionalism'
Supporting facts:
- I am a researcher on the Constitution. The Constitution is to control the power of governments, but digital constitutionalism is now emerging as a word.
Topics: Tech companies, Digital constitutionalism, Legislation
Attention should be given to the 'attention economy structure' while tackling these issues
Supporting facts:
- One thing we have to focus is a structure, attention economy structure.
Topics: Attention economy structure
Report
Generative AI technologies have emerged as key contributors to the propagation of misinformation and disinformation in our society. These systems, by virtue of their capabilities and the considerable amount of false content they can produce instantaneously, hold the potential to interfere with people's ability to make independent decisions and disseminate biased or incorrect information on a grand scale.
The examined findings also underscore the potential for societal 'hallucinations' or mass deceptions triggered by the misuse of generative AI. However, these issues are complex and call for multidimensional and intricate solutions, particularly in the face of the potential 'tsunami of disinformation' that generative AI can engender.
The attention economy model further intensifies the challenges posed by misinformation and disinformation. This refers to an environment where a plethora of information is constantly vying for limited user attention, with misinformation and disinformation often securing victory due to their heightened engagement appeal.
This dynamic spearheads the propagation of false or misleading content, thereby providing a fertile landscape for its growth and influence. Tech companies, seen as pioneers in this digital age, have a significant role to play in mitigating this disinformation crisis.
Indeed, there is a pressing call for these corporations to bolster their commitment to combat false information, featuring reports from fact-checking organisations more prominently on their platforms. Strengthening corporate responsibility, coupled with enhanced collaboration in fact-checking efforts, marks the way forward.
Yahoo Japan's alliance with the Japan Fact-Check Center is one successful precedent in combating the spread of misleading content. Importantly, there is broad agreement that misinformation is a structural issue requiring targeted redress at its roots. This necessitates improvements in data literacy and the development of technological standardisation.
The expanding power of tech companies, becoming increasingly domineering, has been identified as a critical area necessitating immediate action. 'Digital constitutionalism' emerges as a novel regulatory concept offering a promising way to control this amplified influence of tech companies. This involves crafting collaborative global legislation and international frameworks capable of effectively confronting and regulating these platform companies.
In addition to technology-centred solutions, the importance of 'information health' is stressed, advocating a balanced and unbiased intake of data, likened to maintaining nutritional balance in food consumption. Literacy thus gains significance as an essential tool in combating the misinformation epidemic.
An improved awareness and understanding of data intake could instigate a consequential transformation within tech companies known to disseminate harmful or misleading information for profit.
VJ
Vera Jourova
Speech speed
143 words per minute
Speech length
2023 words
Speech time
847 secs
Arguments
The EU is looking into regulating generative AI and disinformation
Supporting facts:
- The EU defines disinformation as the intentional production of misinformation to harm society or electoral processes
- A new chapter is being added to the AI Act in the EU to include principles for controlling and regulating AI, especially high-risk AI applications
- The EU insists on labelling texts and images produced by AI and watermarking deep fakes
Topics: EU, Regulation, AI, Disinformation
Deep fakes, especially those with potential harm, should be controlled or removed
Supporting facts:
- The EU insists on watermarking deep fakes
- The EU is discussing regulations on deep fakes, especially those with potential to manipulate voter preferences
Topics: AI, Deep Fakes, Control
Increase in harmful online practices targeting vulnerable groups such as women, racial and ethnic minorities, LGBTQ+ people
Supporting facts:
- Women in public spaces, politicians, journalists, judges, and NGO leaders are targets of online attacks.
- Aggressive messages and reactions inciting violence are considered illegal under EU law.
Topics: Hate speech, Online harassment, Disinformation
There are regulatory challenges in preventing online disinformation without infringing on freedom of speech
Supporting facts:
- EU is regulating the online space with the main principle being the freedom of speech
- Disinformation has always existed and it's spreading at high speed with internet and social media
- Uncomfortable opinions must not be seen as disinformation by those in power
Topics: regulation, online space, freedom of speech
Support for independent media is key for society to access facts and make free choices
Supporting facts:
- EU is strengthening the power of media
- People have the right to access facts to make autonomous free choices
- EU is supporting independent media
Topics: independent media, fact delivery
Democracies need to be rule makers not takers
Supporting facts:
- Call for democracies worldwide to work together
- Mention of cooperation in G7 on AI code of conduct
Topics: Democracy, Global Cooperation
EU regulation involves civil society, strong media and academic sphere
Supporting facts:
- Stressed the importance of understanding, analyzing through academic involvement
- Belief in the demands of citizens who do not want to be manipulated
Topics: EU Regulation, Civil Society, Media, Academia
Need for more protection for citizens' minds and souls
Supporting facts:
- Worked as a commissioner for five years to protect consumers
- Stressed the importance of not allowing citizens' minds to be 'poisoned'
Topics: Consumer Protection, Citizen welfare
Report
The European Union (EU) is taking strides to regulate generative artificial intelligence (AI) effectively and to counteract the proliferation of disinformation. These efforts are primarily aimed at high-risk AI applications, particularly deep fakes: digitally manipulated videos or audio that portray individuals saying or doing things they never did.
Without clear labelling or watermarking, such deceptive content has the potential to significantly influence public perception and inflict harm, notably in electoral processes. Online harassment and hate speech, specifically those aimed at women, racial and ethnic minorities, and the LGBTQ+ community, are reported to be increasing.
This issue requires immediate attention. However, it involves the challenge of identifying and eliminating such malicious content without infringing upon the principle of freedom of speech, which the EU firmly upholds. The EU, therefore, turns to legislation such as the Digital Services Act to obstruct illegal online content monetisation and forestall disinformation dissemination on the Internet.
A legally binding act, it offers a robust structure for enforcement, complete with penalties for contraventions. Alongside this, the EU is also advocating for a directive against violence towards women, particularly emphasising digital violence. Along with regulation, the EU acknowledges the role of technology companies and social media platforms in controlling disinformation.
The EU urges these corporations to assume responsibility by boosting fact-checking capabilities and complying with the Code of Practice on Disinformation, a series of voluntary commitments designed to combat misinformation online. Backing for independent media also comprises a crucial aspect of the EU's wider strategy.
The EU prioritises the capability of media outlets to deliver factual information, thereby aiding citizens in making informed and free choices. EU policy supports these outlets, strengthening their capacity for accurate and autonomous reporting. Within the fight against disinformation, media literacy is recognised as a long-term challenge.
The EU heavily funds media literacy, working in close collaboration with member states on numerous projects designed to enhance the media literacy skills of the populace. Furthermore, the EU emphasises the necessity for global democracies to cooperate in crafting international rules.
It suggests that collaboration on G7 level could be an efficient way to address issues about the AI code of conduct. This global cooperation is perceived as a pivotal step in creating equitable standards across jurisdictions. The EU's collaborative approach to regulation, involving civil society, local media and academia, underlines its commitment to balancing technological advancement with public welfare.
The EU emphasises the need for broader protection of citizens' minds and souls, informed by its previous experience of protecting consumers' rights and welfare. Lastly, the EU anticipates stricter regulation of online political advertising, with transparency as the central theme: citizens should understand the content they consume rather than fall for manipulative information.
Transparency in online political advertising isn't simply in line with the broader endeavour to rein in disinformation; it is also considered crucial for promoting a healthier, more democratic society.