LLMs like GPT-4 can outperform humans in online debates when given personal data, study finds

New research shows that large language models (LLMs) can outperform humans in persuasive online debates, especially when they can leverage personal data to tailor their arguments. The findings, detailed in Nature Human Behaviour, demonstrate AI's growing capacity to change minds at scale, creating both promising applications for countering misinformation and concerning implications for digital influence campaigns that could reshape public opinion through personalized persuasion.

The big picture: Researchers found OpenAI's GPT-4 was significantly more persuasive than humans in debates, particularly when given access to personal information about debate opponents.

  • The multi-university study involved 900 US participants who provided personal details like age, gender, ethnicity, education level, employment status, and political affiliation.
  • Participants debated one of 30 randomly assigned topics with either another human or GPT-4 for 10 minutes.

By the numbers: GPT-4 demonstrated remarkable persuasive capabilities when equipped with personal information about debate opponents.

  • The AI was 64% more persuasive than humans who had access to the same personal data about their opponents.
  • The model either equaled or exceeded human persuasive abilities across all tested topics.

The psychology factor: Participants were more likely to agree with arguments when they believed they were debating against AI rather than humans.

  • The researchers noted this unexpected finding but couldn’t determine the underlying reasons, highlighting a gap in our understanding of human-AI psychology.
  • “In the context of having a conversation with someone about something you disagree on, is there something innately human that matters to that interaction?” questioned Alexis Palmer, a Dartmouth College fellow not involved in the study.

Why this matters: The findings reveal AI’s growing potential to influence human beliefs and decision-making at scale through personalized persuasion.

  • “Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns,” warns study co-author Riccardo Gallotti, noting we’ve “clearly reached the technological level” where networks of AI accounts could strategically shift public opinion.
  • These capabilities would make real-time detection and debunking of AI-driven influence campaigns extremely challenging.

The counterbalance: The same technology that raises concerns could also provide solutions to misinformation challenges.

  • Gallotti suggests LLMs could generate personalized counter-narratives to educate those vulnerable to online deception.
  • However, he emphasizes that “more research is urgently needed to explore effective strategies for mitigating these threats.”
