Protecting Democracy against Bots and Plots

16 Jan 2024 17:30h - 18:15h

Event report

Elections in 2024 will have an impact on a combined population of over 4 billion people around the world. As the adoption of generative AI ramps up, so do the opportunities and risks created by malicious actors seeking to instil distrust in democratic institutions.

What lessons can be drawn from countries that have successfully defended their elections against cyber threats?

More info @ WEF 2024.

Disclaimer: This is not an official record of the WEF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the WEF YouTube channel.

Full session report

Alexandra Reeve Givens

There is significant concern regarding the role of technology and artificial intelligence (AI) in elections. These advancements have facilitated the spread of misinformation and allowed for targeted messaging to voters. In the current fragmented information ecosystem, there is an increase in mis- and disinformation about the state of the world and about candidates. AI technology makes it easier to personalise messages and target specific groups of voters.

Election officials also face significant threats due to technology. They are often underpaid and overworked, and there is potential for them to fall victim to phishing schemes or doxing, which can compromise their privacy and security.

To address these issues, it is important for society to remain conscious of the threats posed by technology in the electoral process. Investing in trust and safety measures is crucial to protect the integrity of elections. There is a growing recognition that authentic and trusted information sources are essential for the proper functioning of elections.

Tech companies have a responsibility to ensure fair elections by taking steps to promote trusted sources of information. Search engines and social media platforms should play a role in surfacing these sources for users. It is important for companies to have clear usage and content policies that prevent people from using their platforms for mass political targeting campaigns. Companies such as OpenAI have already announced policies in this regard.

In discussions at forums like Davos, companies should be actively involved in addressing how to reduce misinformation. They need to share information and provide support not only for US elections but also for elections around the world. Social media companies learnt from the 2016 elections, and they should keep those systems in place to continue tracking and combating misinformation.

Academics also play a critical role in studying the problem of misinformation and developing interventions. There is now an entire academic field dedicated to studying misinformation and analyzing effective interventions. It is necessary to have interventions in place to navigate the landscape of misinformation in elections.

While some argue for strict regulation of social media platforms to fight misinformation, there is concern about the potential negative consequences. Overregulation can lead to overcorrection and undermine the value of social media as an information sharing platform. Therefore, a whole-society approach that includes media literacy education is necessary to address misinformation effectively.

Techniques such as fact-checking and labeling can be effective in combating misinformation. Signal boosting authentic information and labeling questionable content can help users distinguish between reliable and unreliable information sources.

Concerns have also been raised about who lawfully decides what information AI systems make accessible. There is a fear that governments could exert extreme pressure on tech companies to censor opposition; at the same time, legislating AI could itself lead to extreme censorship and control of information, which presents its own challenges.

A key concern is the decision-making power over information that is arbitrated by AI. Questions are raised about whether the CEO of a tech company or a government minister should decide on the ranking of information. Balancing this power is essential to ensure fairness and avoid undue influence.

To address these challenges, smart interventions and transparency in AI decision-making are crucial. This approach involves enabling the marketplace of ideas to help solve the issue, as well as ensuring transparency in AI-driven content decisions.

In conclusion, there is a growing concern over the role of technology and AI in elections. Misinformation and targeted messaging pose significant threats to the integrity of elections. Addressing these challenges requires investments in trust and safety, involvement of tech companies, academic research, and a whole-society approach that includes media literacy education. Techniques such as fact-checking and labeling can be effective in combating misinformation. The decision-making power over information must be carefully balanced to avoid undue influence.

Jan Lipavský

The analysis features various speakers who raise important points regarding online communication and content, calling for global solutions and discussions in this regard. They argue that events happening in one country can quickly spread to others through various internet platforms, emphasizing the need for a coordinated approach.

One key observation is the approval of the European regulation on NIS (Network and Information Security). This regulation is seen as a positive step that addresses specific issues related to online communication and content. It is acknowledged that different actors are actively seeking ways to solve these issues, and the European NIS regulation is expected to contribute to this effort.

However, the spreading of false content during elections is regarded as a negative and disruptive influence. The analysis highlights how false content can disturb the election process and impact the way societies make decisions. This sentiment is echoed by the speakers, who express concern about the increased dissemination of false information and call for measures to counteract this trend.

Another key argument raised in the analysis is the need for governments to globally agree on solutions to combat manipulation through AI and technology. The speakers suggest that governments should take an active role in regulating and controlling the use of AI to prevent misuse and ensure its ethical use. They argue that this is necessary to maintain peace, justice, and strong institutions.

Companies are also called upon to play their part in addressing these challenges. While acknowledging that companies should not bear sole responsibility, the analysis emphasizes the importance of providing guiding principles. Companies are encouraged to ensure that their tools and technologies are not misused for harmful purposes.

The right not to be manipulated by AI is another crucial point addressed in the analysis. It is emphasized that AI has the potential to create content that is difficult to distinguish from reality, which can lead to vulnerable individuals being manipulated. The speakers call for increased emphasis on protecting individuals from AI manipulation and advocate for regulatory measures that ensure user safety.

Additionally, the analysis highlights the role of internet and multimedia platforms in either accelerating or impeding harmful behaviors in society. When these platforms are used to group people into radicalizing blocs, they can hasten such behaviors. This observation serves as a reminder of the responsibility these platforms have in fostering a safe online environment.

The analysis also notes the importance of corporate social responsibility. Companies are urged to understand that their responsibilities extend beyond financial gains and to ensure that their tools and technologies are not misused for malign processes. This notion aligns with the goal of responsible consumption and production.

Accountability is another key aspect identified in the analysis. It is pointed out that the development of the internet and multimedia platforms often outpaces regulatory frameworks. Therefore, there is a need for some form of accountability from companies to ensure that their actions align with societal expectations.

The analysis also highlights the need to strike a balance between supporting freedom of speech and free journalism and not endangering democratic societies. While celebrating the fundamental right to freedom of speech, it is argued that there should be mechanisms in place to prevent its misuse and protect democratic systems.

In terms of AI legislation, the European Union (EU) is praised for its efforts in this field. The EU’s AI Act and other related legislation are seen as positive steps towards effectively regulating AI technology. This reflects an appreciation for the EU’s proactive approach in addressing the challenges posed by AI.

Furthermore, the analysis sheds light on the power held by tech companies, particularly in relation to their AI systems. It is argued that their capabilities to amplify or suppress information make them influential actors. Consequently, there is a call for government intervention to control and regulate these powerful AI systems to prevent potential misuse that could endanger governance.

Throughout the analysis, the importance of human rights in the digital sphere is emphasized. It is suggested that the same human rights that apply offline should also be upheld in digital technologies. Efforts are being made by countries to promote and protect human rights in the context of digital technologies, primarily through resolutions put forth in the European Union and the General Assembly.

Conversely, there is opposition to efforts by countries such as Russia and China to establish a new set of rules for the digital sphere. It is argued that creating such rules could contradict the principles of free and open digital environments and hinder innovation and growth.

Overall, the analysis reveals the complexities and challenges surrounding online communication and content. The need for global solutions and discussions is emphasized, requiring cooperation between governments, companies, and other stakeholders. It underscores the importance of responsible practices, regulation, and protection of human rights in this evolving digital landscape.

Ravi Agrawal

In 2024, major democracies such as India, the United States, Bangladesh, Pakistan, and Indonesia will hold elections, with an unprecedented number of people expected to participate. Ravi Agrawal emphasizes the significance of this year for global democracy, highlighting the influence these elections will have on shaping democratic processes worldwide.

However, concerns are raised regarding the potential negative impact of technology and artificial intelligence (AI) on the integrity and well-being of democratic systems. Agrawal points out that the spread of disinformation, the rise of nationalism, and the potential implications for liberal values present major challenges that must be addressed. Agrawal argues for the need to ensure that technology and AI act as forces for positive change rather than chaos in democratic processes.

Additionally, Agrawal underscores the importance of a global effort to address the challenges posed by technology in elections. He emphasizes that finding solutions should not be the sole responsibility of Western democracies, but rather a collective effort involving countries worldwide. Agrawal stresses the need for a global strategy to address potential risks and ensure that technology benefits democracy universally.

The issue of accessibility and fairness in the field of AI is also raised by Agrawal. He questions whether smaller countries, with limited access to essential AI components such as chips and know-how, may be left behind in the global AI race. This uneven distribution of AI accessibility and sophistication may have implications for global inequality and hinder the democratization of the tech industry.

On a positive note, Agrawal sees potential in technology that aids media fact-checking and verification. He suggests that if affordable and customizable technology enabling accurate fact-checking becomes available, media outlets such as CNN or AP would be likely to adopt and utilize these tools. Additionally, Agrawal emphasizes the potential of technology in identifying and addressing disinformation, envisioning a partnership between media organizations and technology companies to effectively combat this issue.

However, a concern is the dominance of large tech monopolies in the industry. Agrawal points out that three major cloud computing companies continue to dominate the market, while Nvidia dominates cutting-edge AI chips and TSMC dominates advanced fabrication. This situation may disadvantage smaller countries and companies and impede the democratization of the tech industry.

Regulating technology, particularly AI systems, poses another challenge due to their non-deterministic nature. Agrawal argues that even well-designed and well-restricted AI systems can be manipulated to work around limitations, making regulation a complex task.

Lastly, Agrawal highlights the challenge of combating myths and disinformation, particularly within populations that are not technologically savvy or literate. He notes the low literacy rates in some Indian states, which are as low as 50-60%, and the fact that hundreds of millions of people in Africa and India gained internet access through smartphones. Addressing this challenge is crucial to ensure an informed and engaged citizenry.

In conclusion, Agrawal’s analysis emphasizes the significance of the upcoming elections in major democracies in 2024 for global democracy. While acknowledging the potential risks associated with technology and AI, he also explores the potential for positive change and stresses the need for a global effort to harness the benefits of technology universally. The accessibility and fairness of AI, the dominance of tech monopolies, the complexity of regulating technology, and the challenge of combating disinformation in populations lacking tech literacy are key areas that require attention and action.

André Kudelski

The analysis of the given statements reveals several important insights into the challenges and potential solutions related to the issue of cybercrime and misinformation. Firstly, it is noted that there is a lack of global laws to effectively combat cybercrime. This issue is further complicated by the fact that cybercriminals can originate from anywhere in the world, making regulation and enforcement difficult.

In contrast, there is clear support for applying technology to fight cybercrime. Proponents argue in favor of content traceability and the use of technology solutions to identify fake content. These measures would help establish accountability and deter the spread of misinformation and malicious content.

Another noteworthy observation is the existence of a population that may not be interested in verifying the truthfulness of the information they receive. This implies a certain level of apathy towards the accuracy of information and highlights the importance of promoting media literacy and critical thinking skills.

The analysis also suggests that it is possible to verify the authenticity of videos through the use of artificial intelligence (AI) and the implementation of rules that prevent manipulation. With the right AI algorithms, it becomes feasible to trace the source of the content, thus enhancing verification efforts.

Moreover, the analysis highlights the potential of technology, such as watermarking and blockchain, to introduce traceability in content, similar to how it is used in the identification of components in the food industry. This combination of technologies can help establish trust and ensure the integrity of digital information.
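The traceability idea described above can be illustrated with a simple hash-plus-signature scheme: the publisher fingerprints the content and signs the fingerprint, so any later manipulation breaks verification. The sketch below is a minimal illustration only; the shared key and the record format are assumptions for the example, not a description of any specific watermarking or blockchain product.

```python
import hashlib
import hmac

# Hypothetical publisher key. In practice this would be an asymmetric
# signing key held by the content creator or a provenance registry.
PUBLISHER_KEY = b"example-publisher-key"

def fingerprint(content: bytes) -> str:
    """Content fingerprint: any alteration changes the hash entirely."""
    return hashlib.sha256(content).hexdigest()

def sign_record(content: bytes) -> str:
    """Provenance tag binding the content hash to the publisher's key."""
    return hmac.new(PUBLISHER_KEY, fingerprint(content).encode(), hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Recompute the tag; a mismatch means tampering or an unknown source."""
    return hmac.compare_digest(sign_record(content), tag)

original = b"frame data of a published video"
tag = sign_record(original)
print(verify(original, tag))                   # True: content is unaltered
print(verify(b"manipulated frame data", tag))  # False: content was changed
```

A blockchain-based approach would differ mainly in where the tag is stored (an append-only public ledger rather than the publisher's own records), but the verification logic is the same.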

The importance of media literacy is emphasized, as it allows individuals to form their own opinions rather than blindly accepting or rejecting information. This empowers individuals to critically evaluate content and make informed decisions.

The role of innovation in maintaining an honest ecosystem is also underscored. New initiatives and elements can offer different perspectives, challenging and improving the existing system.

The analysis supports the idea that government should regulate to prevent abuses, but ultimately, the power should lie with the people. This highlights the need for a balance between government intervention and individual autonomy.

AI is acknowledged as a tool that can challenge perceptions and present different perspectives. This can lead to a more comprehensive understanding of what is considered right or wrong in various contexts.

Finally, it is argued that education should enable individuals to understand different views and form their own opinions. By equipping individuals with critical thinking skills, education plays a crucial role in promoting a more informed and discerning society.

In conclusion, the analysis of the given statements highlights the need for international cooperation in addressing cybercrime, the potential of technology in combating misinformation, the importance of media literacy, and the role of innovation, government regulation, AI, and education in promoting a more informed and responsible society. These insights provide a comprehensive understanding of the challenges and potential solutions associated with cybercrime and misinformation.

Smriti Zubin Irani

The analysis consists of five statements that shed light on various aspects of India’s democratic system and digital readiness.

The first statement highlights that India is digitally prepared for elections. It is mentioned that the elections in India are conducted electronically, indicating a significant step towards embracing technology in the electoral process. Furthermore, AI watermarking is used to verify the authenticity of information sources, ensuring the accuracy and credibility of the data involved. This reinforces the commitment to a transparent and secure election process in India. The fact that 945 million Indians qualify as voters, with 94% of them biometrically authenticated, underlines the substantial size and scope of the electorate. It is also mentioned that an impressive 70% of these eligible voters exercise their right to vote, reflecting the high level of civic participation in the country.

The second statement focuses on the promotion of citizen engagement in governance and policymaking through digital platforms. The MyGov platform is specifically highlighted as an avenue for citizen engagement in policy making. Citizens are encouraged to provide inputs and suggestions, which are then taken into consideration while framing policies, such as the interim budget. This demonstrates a commitment to inclusivity and involving the public in decision-making processes, thereby strengthening the democratic fabric of India.

The third statement highlights the empowerment of grassroots democracy in India, with 1.5 million women being voted into office at the grassroots level. This showcases the progress made in promoting gender equality and women’s representation in politics. The inclusion and participation of women in decision-making processes at the grassroots level is a positive step towards achieving the Sustainable Development Goal of gender equality.

The fourth statement emphasizes the importance of multiple independent pillars in serving democracy. It is stated that democracy in India is not solely reliant on the government but also supported by independent media, a fair judiciary, and the robust engagement of officials in the democratic process. This multi-dimensional approach ensures a system of checks and balances, safeguarding the principles of democracy and upholding justice.

The final statement suggests that despite having tools like MyGov, the government does not hold excessive power. It is mentioned that election processes are delinked from the government and politics, ensuring a balance of power. This indicates a commitment to maintaining the integrity and separation of powers within the democratic system.

In conclusion, the analysis highlights India’s digital readiness for elections, with a focus on transparency, security, and participation. It further showcases the promotion of citizen engagement and the empowerment of grassroots democracy in the country. Additionally, it emphasizes the importance of multiple independent pillars in serving democracy and ensuring a balance of power. These observations demonstrate India’s commitment to building a strong democratic system while addressing important issues such as inclusivity, gender equality, and citizen participation.

Matthew Prince

Cloudflare, an internet infrastructure and security company, uses AI and machine learning systems to predict and mitigate threats and vulnerabilities in order to protect its clients. With its unique position in front of 20-25% of the internet, it has a significant advantage in accurately identifying potential risks. It also prioritizes accessibility by collaborating with tech giants such as Microsoft and Google to make its technologies more widely available.

Matthew Prince, CEO of Cloudflare, advocates for a stable global governmental infrastructure and actively works with NGOs to improve election systems worldwide. He recognizes the potential for technology to help the media in distinguishing between authentic and fabricated content. However, he emphasizes that it is the media’s responsibility, rather than Cloudflare’s, to determine the accuracy of information.

Regulating AI poses challenges due to its non-deterministic nature, making it difficult to control or predict specific outcomes. Although regulation may hinder innovation, Prince suggests a cautious approach, focusing on aspects that can be controlled.

In summary, Cloudflare utilizes AI and machine learning to anticipate and address threats and vulnerabilities, while promoting accessibility through collaborations. Matthew Prince prioritizes a stable governmental infrastructure and acknowledges the role of technology in assisting media. The regulation of AI presents challenges, requiring a careful and focused approach.

Alexandra Reeve Givens
Speech speed: 226 words per minute
Speech length: 2007 words
Speech time: 533 secs

André Kudelski
Speech speed: 160 words per minute
Speech length: 688 words
Speech time: 259 secs

Jan Lipavský
Speech speed: 150 words per minute
Speech length: 1366 words
Speech time: 547 secs

Matthew Prince
Speech speed: 189 words per minute
Speech length: 1583 words
Speech time: 502 secs

Ravi Agrawal
Speech speed: 193 words per minute
Speech length: 2122 words
Speech time: 659 secs

Smriti Zubin Irani
Speech speed: 160 words per minute
Speech length: 638 words
Speech time: 239 secs