AI’s Influence on the 2024 Election: A Closer Look
Ahead of the New Hampshire primary, a robocall featuring an artificially generated voice resembling President Joe Biden caught the attention of voters. The incident became a flashpoint, raising alarms about the potential misuse of artificial intelligence (AI) in the 2024 United States election. In response, the Federal Communications Commission (FCC) swiftly banned the use of AI-generated voices in robocalls, reflecting a growing urgency to safeguard democratic processes against technological manipulation.
As the nation approaches a pivotal election, this year represents a watershed moment in which AI technology plays a central role in shaping electoral dynamics. With the ability to create convincing audio, video, and images, experts expressed concern over how AI might mislead voters and disrupt the election landscape.
State Legislation Responds to AI Risks
Recognizing the risks associated with the use of AI in politics, sixteen states have enacted legislation to regulate AI’s role in elections and campaigns. Most of these new laws require disclaimers on synthetic media published in the lead-up to elections. At the federal level, the Election Assistance Commission, an agency that aids election officials, unveiled an “AI toolkit” designed to offer guidance on how to communicate effectively in a landscape fraught with misinformation.
This legislative momentum aims to empower voters with the information necessary to navigate the digital content they encounter. States have also established dedicated resources to help citizens identify AI-generated material, underscoring a collective effort to mitigate the impact of fabricated information.
Experts Weigh In on Misinformation Challenges
Despite the fears surrounding AI’s potential to create deepfakes — fabrications that depict candidates saying or doing things they never did — experts observed that the anticipated flood of AI-driven misinformation did not emerge during the election season. Instead, traditional misinformation tactics resurfaced, often relying on text-based claims, out-of-context images, and misleading videos.
“The use of generative AI turned out not to be necessary to mislead voters,” said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. Indeed, misinformation persisted on familiar grounds, resulting from conventional propaganda techniques rather than revolutionary AI methods.
As Election Day approached, misinformation about vote counting, mail-in ballots, and other voting processes made headlines. Daniel Schiff, an assistant professor of technology policy at Purdue University, pointed out that there was no overwhelming disinformation campaign aimed at confusing voters about polling locations. This indicated that traditional misinformation strategies were still effective, even as technology continued to advance.
Viral Misinformation Over AI Content
Research suggested that AI-generated claims that gained traction mainly reinforced existing narratives rather than inventing new fabrications to mislead. In one instance, following false claims by former President Donald Trump about Haitians in Ohio, AI-generated images of animal abuse circulated online, tapping into an emotional response rather than creating entirely new falsehoods.
Experts agreed that the traditional dynamics of influence still prevailed, with well-known figures able to spread misinformation without needing AI-generated content. For instance, Trump’s claims about noncitizen voting resonated with voters, even though such occurrences are exceedingly rare.
The Role of Regulatory Measures
Amid concerns about AI’s disruptive potential, tech companies took proactive steps to limit harmful political speech. Meta, which oversees platforms like Facebook and Instagram, mandated that advertisers disclose any AI usage in political advertising. Additionally, TikTok implemented measures to label certain types of AI-generated content automatically. OpenAI also took significant steps by banning its services from being used for political campaign purposes.
While AI had the potential to escalate misinformation, experts noted that legislation and increased vigilance on the part of tech companies helped curb some of the worst outcomes. Public advocates and researchers worked diligently to address AI’s ramifications for electoral integrity, spurring collaboration across government, industry, and civil society.
Traditional Techniques Still Dominant
The prevalence of low-tech tactics overshadowed the anticipated impact of advanced AI tools. According to Siwei Lyu, a professor of computer science at the University at Buffalo, the muted influence of AI came down to the efficacy of traditional misinformation methods. On platforms like Instagram, messages disseminated by influential accounts relied more on traditional media formats than on AI-generated content.
Additionally, deepfake incidents that did surface were often humorous or satirical rather than malicious. The majority served to magnify prevailing political narratives rather than outright deceive.
Meanwhile, foreign actors that traditionally exploited online platforms found it easier to spread misinformation through staged or fabricated videos than through AI technology. The Foreign Malign Influence Center noted that existing influence operations managed to disseminate disinformation and disorient voters without relying on AI.
The Future of AI and Politics
While AI technology continues to develop rapidly, the efforts of state legislation, social media platforms, and public awareness helped mitigate some threats posed by AI-generated misinformation during the 2024 election. Looking forward, as technology evolves, vigilance will likely remain a cornerstone of safeguarding the democratic process.
As Daniel Schiff emphasized, enhancing deepfake detection and launching public awareness initiatives have proven beneficial. However, tech companies must stay ahead of advancements in AI to ensure future elections remain fair and equitable.
Key Takeaways
The 2024 election illustrates the complexity of AI’s impact on voter behavior and misinformation. While AI figured prominently in discussions surrounding the election, traditional misinformation techniques and strong regulatory responses proved more consequential. It remains to be seen how legislative measures and technological safeguards will adapt as AI continues to advance in the political realm. If policymakers, tech companies, and the public remain vigilant, society stands a better chance of harnessing the benefits of the technology without succumbing to its potential harms.