Polling gets an AI assist

OpenResearch, the OpenAI-funded research nonprofit, has successfully tested AI chatbots as polling assistants in its ongoing unconditional cash transfer study, with more than three-quarters of respondents choosing to engage with the bot-assisted survey format. The breakthrough could transform the polling industry by enabling researchers to conduct qualitative research at scale while gathering richer, more nuanced data than traditional multiple-choice surveys allow.
What you should know: The AI-assisted polling approach produced more engaged respondents and more comprehensive data than traditional survey methods.
- Participants who chose the chatbot option spent a median of 16 minutes on the survey, offering detailed responses that wouldn’t be possible with standard multiple-choice formats.
- The AI could probe answers and ask for clarification or expansion, providing OpenResearch with much richer datasets.
- Two-thirds of respondents rated the experience positively, with a similar percentage saying they would participate again.
Why this matters: Traditional polling faces mounting challenges as landlines disappear and response rates decline, while online surveys remain constrained by limited engagement and rigid question formats.
- Pollsters have struggled with accuracy since missing Trump’s support levels in 2016, partly due to difficulty reaching representative samples.
- AI-assisted surveys could blend the depth of qualitative research with the scale of quantitative polling, something previously impossible due to cost and time constraints.
- The technology allows researchers to focus on what they want to learn rather than how to precisely word questions for universal understanding.
How it works: The chatbot system adapts to individual responses rather than forcing uniform question formats across all participants.
- Instead of asking rigid yes/no questions about employment status, for example, the AI can handle nuanced responses covering everything from steady work to gig economy participation.
- “Before AI, we had to ask the same questions in exactly the same way,” said Elizabeth Rhodes, OpenResearch’s research director.
- The system can customize interactions based on how respondents engage, potentially making surveys more conversational and less interrogative.
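The adaptive flow described above can be sketched as a simple probe-or-proceed loop. This is a hypothetical illustration, not OpenResearch's actual system: the heuristic for detecting hedged answers and the `followup_prompt` stub (which stands in for a real language-model call) are both assumptions made for the sake of the example.

```python
# Hypothetical sketch of one adaptive survey turn. NOT OpenResearch's
# implementation; a stub stands in for the language-model call so the
# control flow is runnable on its own.

def needs_followup(answer: str) -> bool:
    """Flag short or hedged answers that merit a clarifying probe."""
    hedges = ("sort of", "kind of", "it depends", "not sure")
    return len(answer.split()) < 8 or any(h in answer.lower() for h in hedges)

def followup_prompt(question: str, answer: str) -> str:
    """Stub for an LLM call that would draft a tailored probe."""
    return f"You mentioned: '{answer}' Can you say more about that?"

def run_turn(question: str, answer: str) -> list[str]:
    """Return the transcript for one question, probing only when needed."""
    transcript = [question, answer]
    if needs_followup(answer):
        transcript.append(followup_prompt(question, answer))
    return transcript

turns = run_turn(
    "How would you describe your current work situation?",
    "It depends, I drive for a rideshare app some weeks.",
)
```

The key design point mirrors the article: a rigid survey asks the same question of everyone, while this loop branches on the respondent's own wording, so a gig worker's hedged answer triggers a probe that a multiple-choice form could never ask.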
The human factor: Early evidence suggests people may be more willing to share sensitive information with AI than human interviewers.
- Three-quarters of respondents said they felt comfortable honestly describing their stress experiences to the chatbot.
- Shuwei Fang, a fellow at Harvard’s Shorenstein Center, describes this as “the intimacy dividend” — the potential for people to speak more frankly without fear of human judgment.
- OpenResearch experimented with programming empathy into chatbot responses, though reactions were mixed.
Room for disagreement: Not all participants embraced the AI-assisted format, with about one-fifth finding the interaction “creepy.”
- Fuller, more nuanced answers create new challenges for researchers in sorting and categorizing responses.
- The field still needs to establish standards and ethical frameworks for AI-assisted polling, according to Rhodes.
- Questions remain about consistency and comparability across different AI systems and implementations.
The bigger picture: This represents a shift from using AI primarily for efficiency gains to creating entirely new capabilities in research and data collection.
- Rather than simply automating existing processes, AI chatbots enable polling methodologies that were previously impossible at scale.
- The technology could make polling more dynamic, with questions adapting to responses in real-time rather than following predetermined scripts.
- AI research companies such as Aaru have already experimented with AI polling for political predictions, though with mixed accuracy.