AI Chatbots Are Shockingly Good at Political Persuasion
Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections.

Stickers sit on a table during in-person absentee voting on November 1, 2024, in Little Chute, Wisconsin. Photo by Scott Olson/Getty Images
Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.
Fears over whether artificial intelligence can influence elections are nothing new. But a pair of new papers released today in Nature and Science show that bots can successfully shift people’s political attitudes—even if what the bots claim is wrong.
The findings cut against the prevailing logic that it’s exceedingly difficult to change people’s minds about politics, says David Rand, a senior author of both papers and a professor of information science and of marketing and management communications at Cornell University who specializes in artificial intelligence.
Stephan Lewandowsky, a cognitive scientist at the University of Bristol in England, who was not involved in the new studies, says they raise important questions: “First, how can we guard against—or at least detect—when LLMs [large language models] have been designed with a particular ideology in mind that is antithetical to democracy?” he asks. “Second, how can we ensure that ‘prompt engineering’ cannot be used on existing models to create antidemocratic persuasive agents?”
The researchers studied more than 20 AI models, including the most popular versions of ChatGPT, Grok, DeepSeek and Meta’s Llama.
In the experiment described in the Nature paper, Rand and his colleagues recruited more than 2,000 U.S. adults and asked them to rate their candidate preference on a scale of 0 to 100. The team then had the participants chat with an AI that was trained to argue for one of the two 2024 U.S. presidential candidates, Kamala Harris or Donald Trump. After the conversation, participants rated their candidate preference again.
“It moved people on the order of a couple of percentage points in the direction of the candidate that the model was advocating for, which is not a huge effect but is substantially bigger than what you would expect from traditional video ads or campaign ads,” Rand says. Even a month later, many participants still felt persuaded by the bots, according to the paper.
The results were even more striking among about 1,500 participants in Canada and 2,100 in Poland. Interestingly, though, the largest shift in opinion came from about 500 people who talked with bots about a statewide ballot measure to legalize psychedelics in Massachusetts.
Notably, if the bots didn’t use evidence to back up their arguments, they were less persuasive. And while the AI models mostly stuck to the facts, “the models that were advocating for the right-leaning candidates—and in particular the pro-Trump model—made way more inaccurate claims,” Rand says. That pattern held across countries and AI models, although people who were less informed about politics overall were the most persuadable.
The Science paper tackled the same questions but from the perspective of chatbot design. Across three studies in the U.K., nearly 77,000 participants discussed political issues with chatbots. The size of an AI model and how much the bot knew about the participant had only a slight influence on how persuasive it was. Rather, the largest gains came from how the model was trained and how it was instructed to present evidence.
“The more factual claims the model made, the more persuasive it was,” Rand says. The problem occurs when such a bot runs out of accurate evidence for its argument. “It has to start grasping at straws and making up claims,” he says.
Ethan Porter, co-director of George Washington University’s Institute for Data, Democracy and Politics, describes the results as “milestones in the literature.”
“Contra some of the most pessimistic accounts, they make clear that facts and evidence are not rejected if they do not conform with one’s prior beliefs—instead facts and evidence can form the bedrock of successful persuasion,” says Porter, who wasn’t involved in the papers.
The finding that people are most effectively persuaded by evidence rather than by emotion or feelings of group membership is encouraging, says Adina Roskies, a philosopher and cognitive scientist at the University of California, Santa Barbara, who also was not involved in the studies. Still, she cautions, “the bad news is that people are swayed by apparent facts, regardless of their accuracy.”
