Digital dominance in the 2024 elections

Digital technologies, particularly AI, have been integral to all stages of the electoral process for several years now. What distinguishes the current landscape is their unprecedented scale and impact.


As a historic number of voters heads to the polls, determining the future course of over 60 nations and the EU in the years ahead, all eyes are on digital technologies, especially AI.

Digital technologies, including AI, have been integral to every stage of the electoral process, from the inception of campaigns to polling stations, for several years now; what sets the current moment apart is their unprecedented scale and impact. Generative AI, a type of AI that lets users quickly create new content, including audio, video, and text, made a significant breakthrough in 2023, reaching millions of users. By producing vast amounts of content at speed, it amplifies misinformation, spreading false and deceptive narratives at an unprecedented pace. With a multitude of pivotal elections taking place worldwide, intense focus has fallen on synthetically generated content, given its potential to sway election outcomes.

Political campaigns have experienced the emergence of easily produced deepfakes, stirring worries about information credibility and setting off alarms among politicians who called on Big Tech for more robust safeguards.

Big Tech’s response 

Key players in generative AI, including OpenAI and Microsoft, joined platforms like Meta Platforms, TikTok, and X (formerly Twitter) in the battle against harmful content at the Munich Security Conference. Signatories of the tech accord committed to working together to create tools for identifying targeted content, raising public awareness through educational campaigns, and taking action against inappropriate content on their platforms. To address this challenge, potential technologies being considered include watermarking or embedding metadata to verify the origin of AI-generated content, focusing primarily on photos, videos, and audio.

After European Commissioner for Internal Market Thierry Breton urged Big Tech to assist European endeavours to combat election misinformation, tech firms acted promptly.

Back in February, TikTok announced that it would launch in-app, local-language election centres for EU member states to prevent misinformation from spreading ahead of the election year.

Meta intends to launch an Elections Operations Center to detect and counter threats like misinformation and misuse of generative AI in real time. Google collaborates with a European fact-checking network on a unique verification database for the upcoming elections. Previously, Google announced the launch of an anti-misinformation campaign in several EU member states featuring ‘pre-bunking’ techniques to increase users’ capacity to spot misinformation. 

Tech companies are, by and large, supporting individual governments' efforts to tackle the spread of election-related misinformation. Google is teaming up with India's Election Commission to provide voting guidance via Google Search and YouTube for the upcoming elections. It is also partnering with Shakti, the India Election Fact-Checking Collective, to combat deepfakes and misinformation, offering training and resources throughout the election period.

That said, some remain dissatisfied with tech companies' ongoing efforts to mitigate misinformation. Over 200 advocacy groups have called on tech giants like Google, Meta, Reddit, TikTok, and X to take a stronger stance on AI-fuelled misinformation before global elections. They claim that many of the largest social media companies have scaled back necessary interventions such as ‘content moderation, civil-society oversight tools and trust and safety’, making platforms ‘less prepared to protect users and democracy in 2024’. Among other requests, the companies are urged to disclose AI-generated content and prohibit deepfakes in political ads, promote factual content algorithmically, apply uniform moderation standards to all accounts, and improve transparency through regular reporting on enforcement practices and disclosure of the AI tools and the data they are trained on.

EU to walk the talk?

Given the far-reaching impact of its regulations, the EU has assumed the role of de facto regulator of digital issues. Its policies often set precedents that influence digital governance worldwide, positioning the EU as a key player in shaping the global digital landscape.

European Commissioner for Internal Market Thierry Breton

The EU has been proactive in tackling online misinformation through a range of initiatives. These include implementing regulations like the Digital Services Act (DSA), which holds online platforms accountable for combating fake content. The EU has also promoted media literacy programmes and established the European Digital Media Observatory to monitor and counter misinformation online. With European Parliament elections approaching and the rising prevalence of AI-generated misinformation, leaders are ramping up efforts to safeguard democratic integrity against online threats.

Following the Parliament’s adoption of rules on online political advertising, which require clear labelling and prohibit the sponsorship of ads from outside the EU in the three months before an election, the European Commission issued guidelines for Very Large Online Platforms and Search Engines to protect the integrity of elections from online threats. 

The new guidelines cover various election phases, emphasising internal reinforcement, tailored risk mitigation, and collaboration with authorities and civil society. The proposed measures include establishing internal teams, conducting elections-specific risk assessments, adopting specific mitigation measures linked to generative AI and collaborating with EU and national entities to combat disinformation and cybersecurity threats. The platforms are urged to adopt incident response mechanisms during elections, followed by post-election evaluations to gauge effectiveness.

The EU political parties have recently signed a code of conduct brokered by the Commission, intended to maintain the integrity of the upcoming elections for the Parliament. The signatories pledge to ensure transparency by labelling AI-generated content and to abstain from producing or disseminating misinformation. While this introduces an additional safeguard to the electoral campaign, the responsibility for implementation and monitoring falls on the European umbrella parties rather than the national parties conducting the campaign on the ground.

What to expect

The significance of the 2024 elections extends beyond selecting new world leaders. They serve as a pivotal moment to assess the profound influence of digital technologies on democratic processes, putting digital platforms in the spotlight. The readiness of tech giants to uphold democratic values in the digital age and respond to increasing demands for accountability will be tested. 

Likewise, the European Parliament elections will test the EU’s ability to lead by example in regulating the digital landscape, particularly in combating misinformation. The effectiveness of the EU initiatives will be gauged, shedding light on whether collaborative efforts can establish effective measures to safeguard democratic integrity in the digital age.