Deepfakes in campaign ads expose limits of Texas election law
Texas election campaigns see growing use of deepfakes in political advertising.
AI-generated political advertisements are becoming increasingly visible in Texas election campaigns, highlighting gaps in existing laws designed to regulate deepfakes in political messaging.
Texas was the first US state to adopt legislation restricting the use of deepfakes in campaign advertisements. However, the law applies only to state-level races. It does not cover federal contests, including the US Senate race that has dominated advertising spending in Texas and featured several AI-generated campaign ads.
Some lawmakers and experts warn that the growing use of AI-generated political content could complicate election campaigns. During recent primary contests, campaign advertisements featuring manipulated or synthetic images of political figures circulated widely across media platforms.
State Senator Nathan Johnson, who has proposed legislation to strengthen the state’s rules regarding deepfakes, said the rapid evolution of AI technology makes the issue increasingly urgent. Johnson argues that voters should be able to make decisions based on accurate information rather than manipulated media.
The current Texas law, adopted in 2019, contains several limitations. It applies only to video content, requires proof of intent to deceive or harm a candidate, and covers only material distributed within 30 days of an election. Critics say these restrictions make the law difficult to enforce and limit its practical impact.
Lawmakers from both parties attempted to address some of these issues during the most recent legislative session. Proposed reforms included removing the 30-day restriction, requiring clear disclosure when AI is used in political advertising, and allowing candidates to pursue legal action to block misleading ads. Although both chambers of the Texas legislature passed versions of the legislation, the proposals ultimately failed to become law.
Supporters of stricter regulation argue that the rapid advancement of generative AI tools is making it harder to distinguish synthetic media from authentic content. Some political leaders warn that increasingly realistic deepfakes could eventually influence election outcomes.
Others, however, caution that regulating political content raises constitutional concerns. Some lawmakers argue that many AI-generated political ads resemble satire or parody, forms of political speech protected by the First Amendment.
At the federal level, regulation of congressional campaign advertising falls under the Federal Election Commission’s authority. In 2024, the agency declined to begin a formal rulemaking process on AI-generated political ads, leaving states and policymakers to continue debating how to address the emerging issue.
Experts warn that as AI tools continue to improve, distinguishing authentic political messaging from deepfakes and other forms of synthetic content will likely become more complex.
