AI’s influence on the 2024 election – not as damaging as feared

Friday, November 15, 2024

As the dust settles after the 2024 election, concerns over artificial intelligence’s role in campaign tactics and misinformation haven’t fully dissipated. But many experts are acknowledging that, while AI had a definite impact, it wasn’t as destructive to democracy as some had feared.

Before the election, technologists and democracy advocates sounded the alarm about AI’s potential to amplify mis- and disinformation. There were warnings about the power of AI to quickly generate convincing fakes and spread misleading narratives, which could manipulate voters at an unprecedented scale. And while AI-generated disinformation did surface, it did not go unchecked: election monitors and researchers tracked and, in some cases, countered AI-driven tactics, particularly as they became more visible in the final weeks before Election Day.

Disinformation did indeed appear in AI-powered forms, including deepfakes and fabricated claims designed to malign candidates and cast doubt on election integrity. Notable examples included false allegations of misconduct against vice-presidential nominee Tim Walz and doctored videos showing election officials destroying ballots, which were ultimately traced to foreign disinformation efforts.

A particular focus for democracy advocates was the rise of AI-driven databases, such as EagleAI, used to question the legitimacy of certain voters. This tool was reportedly used by organized groups to target vulnerable voters—including students, same-day registrants, and U.S. service members overseas—through ballot challenges. Although AI-assisted voter challenges remain a troubling development, it seems unlikely that such efforts significantly altered the overall outcome.

The rollback of social media misinformation policies also raised alarms, particularly as platforms like Meta and X (formerly Twitter) moved away from flagging or restricting election misinformation. This allowed more dubious content to flow freely, as companies prioritized open political dialogue over content moderation. Yet, despite a looser approach to enforcement, the anticipated onslaught of disinformation didn’t materialize in overwhelming force, largely due to watchdog groups’ and media organizations’ vigilance.

Election misinformation also found its way onto less traditional platforms, like Discord and Twitch, which were relatively untested in the political arena. Yet the lack of established trust and safety teams on these platforms didn’t lead to major breakdowns in public confidence, though they will likely be more closely monitored in the future.

Influencers, in particular, became potent channels for political messages this election season, often operating outside the ethical standards traditionally applied to journalists. Some influencers received financial backing from campaigns, an area that remains opaque due to lax disclosure laws. Yet the role of influencers didn’t ultimately erode voter trust in the process as much as anticipated.

Looking back, while AI did shape the 2024 election in various ways, its influence may not have been as insidious as once feared. Swift actions by journalists, election monitors, and concerned citizens helped to curb the spread of deceptive narratives and protect the electoral process. The experiences of this election cycle have shown that, even with the rise of generative AI, society’s vigilance and resilience can uphold democratic integrity.

As the nature of campaigns and information-sharing continues to evolve, there are lessons to be learned for future elections. Strengthened disclosure laws for influencers, ongoing research into the impact of digital misinformation, and more robust trust and safety policies across social media will be essential. AI’s role in politics isn’t going away, but with thoughtful strategies and watchful eyes, we can navigate these technological shifts without compromising democracy.
