AI robocall threats loom over US election
State officials across the US are bracing for deepfake robocalls and working to prevent voter misinformation ahead of the upcoming election.

Election officials across the US are intensifying efforts to counter deepfake robocalls as the 2024 election nears, amid concern over AI-driven disinformation campaigns. Unlike manipulated images or videos, which can be inspected visually, fake audio calls targeting voters are harder to detect, leaving officials bracing for the impact on public trust. A recent incident in New Hampshire, where a robocall falsely claimed to be from President Biden and urged people to skip voting, highlighted how disruptive these AI-generated calls can be.
Election leaders have developed low-tech methods to counter this high-tech threat, such as unique code words to verify identities in sensitive phone interactions. In states like Colorado, officials have been trained to respond quickly to suspicious calls, including hanging up and verifying information directly with their offices. Colorado’s Secretary of State Jena Griswold and other leaders are urging election directors to rely on trusted contacts to avoid being misled by convincing deepfake messages.
To counter misinformation, some states are also enlisting local leaders and community figures to help debunk false claims. Officials in states like Minnesota and Illinois have collaborated with media outlets and launched public awareness campaigns, warning voters about potential disinformation in the lead-up to the election. These campaigns, broadcast widely on television and radio, aim to preempt misinformation by providing accurate, timely information.
While no confirmed cases show that robocalls have swayed voters, election officials regard the potential impact as severe. Local efforts to counteract these messages, such as public statements and community outreach, serve as a reminder of the new and evolving risks that AI technology brings to election security.