

AI rivals humans in political persuasion

Credit: Google DeepMind from Pexels

New research reveals that people find AI-delivered political arguments convincing. This could help bridge political divides—or fuel polarization.

As large language models become increasingly ubiquitous, it's only a matter of time before they're tasked with generating political messages—if they're not already doing so. New research by two professors at Stanford Graduate School of Business explores how people respond to AI-generated political appeals and what the implications might be.

"We were interested in whether AI-generated messages could be as persuasive as messages that were generated by humans," says Robb Willer, a professor of organizational behavior (by courtesy) at Stanford GSB, who investigated this question in an article in Nature Communications with a team of researchers from the Politics and Social Change Lab (which he directs) and the Stanford Institute for Human-Centered AI.

"Quite consistently, across a range of policy issues, that's exactly what we found: Messages developed by AI are about as persuasive as those developed by people, though readers did not know the source of the message."

Zakary Tormala, a professor of marketing at Stanford GSB, had a different question: Does knowing that a message was written by AI change how people respond to it? With Louise Lu, a doctoral candidate who led the project, and Adam Duhachek from the University of Illinois Chicago, Tormala tested how receptive people are to ideas when they hear that a message comes from an AI versus a human source. Their study, published in Scientific Reports, revealed that people are more open to hearing about opposing policy positions when those positions are presented by AI.

"We were thinking AI might sometimes do better than human sources in this context because people believe AI has no persuasive intent, is not inherently biased, and has access to a wealth of information," Tormala says. "In our studies, all of these factors increased people's willingness to engage with arguments presented by AI that countered their own beliefs.

In essence, Willer tested the message, Tormala the messenger.

Argument by LLM

Willer and his colleagues ran three surveys in which they measured participants' opinions on a range of policies, including a public smoking ban, gun control, a carbon tax, and automatic voter registration. Participants read messages in support of these policies that were generated either by AI or by people. (A control group saw messages unrelated to the policy.) A survey at the end measured whether people's opinions had shifted.

"The only thing that varied was the content of the essays, and we saw similarly persuasive effects whether it was written by a human or an AI," Willer says. The strongest effects, notably, emerged among people who already supported the policies—the AI-written messages nudged them more deeply into their conviction.

"We also found that people had different perceptions of these persuasive appeals," Willer says. Although the messages' authorship was not revealed, participants consistently suggested that the power of appeals written by people came from their use of narrative and references to personal experience. Appeals written by LLMs were considered persuasive due to their logical reasoning and clear presentation of facts.

This finding complements Tormala and Lu's study, which found that people are more receptive to messages opposing their stances when they are told that those messages come from AI, as they consider AI to be more informed, less biased, and less intent on persuasion than a person.

"When a person tries to convince us of their position, we often don't want to hear it," Tormala says. "Especially if the person is on the other side, we assume we know what they think, that they're uninformed and biased, and that they're simply trying to persuade us or win an argument. So we tend to dismiss them." AI, on the other hand, is viewed as highly informed and more objective.

Lu led the team through a series of experiments that all followed the same fundamental design. Participants received messages on a particular policy issue that argued against their existing beliefs. If they were in favor of vaccines, for instance, they read a message presenting the case against vaccination. Everybody read the same message—these were prepared by the researchers—but some were told the arguments were crafted by people, while others were told they were generated by AI.

The first two experiments demonstrated that people are more receptive to counterarguments when they believe they have come from AI. The third experiment demonstrated a greater willingness to share and seek out more information about counterarguments when the origin was thought to be AI. The fourth experiment showed a lower level of animosity toward the opposing side when the messenger was believed to be AI.

"We first see that people, when faced with a message from AI rather than another person, are more open to the other side. They're more receptive to listening to the opposing position. Then we see that they're more willing to share these ideas with others, and they even start to see the other side differently, to consider the other side a bit more reasonable," Tormala says. "There's a cascading set of outcomes that are tied to the initial finding on openness."

Machine politics

Lu notes that these findings could inform "little tools to chip away" at the problem of political polarization. For example, if social media companies tried to present consumers with more accurate and balanced information, then attributing that information to AI could make users more receptive to hearing differing opinions and could lead them to process diverse viewpoints less defensively. "If leveraged at scale," says Lu, "this approach might offer one small way to help curb polarization."

"That said, our research is agnostic to whether or not the information is accurate," Tormala says. If people are more receptive to AI-generated messages regardless of their accuracy, this could facilitate the spread of misinformation. The takeaway, he says, "is not that 'Hey, AI is going to save us,' but that people react differently to information when they think it comes from AI. This reaction could be good or bad depending on the information they're receiving."

Willer, too, notes a range of implications that stem from his team's findings. On the one hand, the persuasive effects of AI were relatively small. LLMs may make the production of political messaging more efficient, yet they may only shift opinions by a small percentage. This alone isn't an outcome that would "destabilize society," Willer says.

On the other hand, the fact that AI-generated messages reinforced people's existing beliefs in Willer's studies suggests that their use could increase polarization. Willer thinks it's possible that during the 2026 U.S. midterm elections, foreign or domestic actors could use artificial agents to flood social media with campaigns designed to drive a deeper wedge between Americans.

"We already have such negative discourse around politics in this country, and it would not be technically difficult to massively scale up AI-generated content and make things worse," he says. "The mind boggles at the different ways this could be used to threaten our already-frazzled democracy."

More information: Hui Bai et al, LLM-generated messages can persuade humans on policy issues, Nature Communications (2025).

Louise Lu et al, How AI sources can increase openness to opposing views, Scientific Reports (2025).

Journal information: Nature Communications, Scientific Reports

Provided by Stanford University
