Polls 10 Times Faster and 10 Times Cheaper: The Era When AI Reads the Political Climate

Can AI Listen to "Public Opinion"? — A Quiet Revolution Begins in the Field of Surveys

"What is the first image or emotion that comes to mind when you hear the word 'politician'?"

The voice on the other end of the phone sounds like a young woman's. It is calm and businesslike, and the conversation flows naturally. But the voice does not belong to a human. It belongs to an AI agent, a software-driven interviewer.

While the respondent airs distrust or cynicism toward politicians, several AIs are at work behind the scenes: one checks whether the answer actually addresses the question, another probes deeper when a response is shallow, and a third tries to detect fraudulent participants and bots. This is not a scene from a futuristic lab. It is a real political survey conducted by the French AI polling company Naratis.

For a long time, opinion polls have been an essential tool for visualizing the voice of society. Approval ratings before elections, opinions on policies, evaluations of company products, and attitudes towards social issues. The numbers presented in newspapers and on television have influenced the decisions of politicians, companies, the media, and voters themselves.

However, that foundation is now shaking. Fewer people answer phone calls, and fewer agree to participate in surveys. Many ignore unknown numbers and abandon long questionnaires. As response rates fall, surveys become more expensive and more biased. This is where AI-driven automation of opinion polls comes in.

Pierre Fontaine, the founder of Naratis, describes the company's strength as "having people converse with AI rather than tick checkboxes." Traditional quantitative surveys collect large volumes of answers to fixed options: agree, disagree, neutral. Naratis instead targets qualitative research, which is far more time-consuming and costly: exploring not only what people think but why they think it, through interviews and group discussions with small numbers of participants.

This territory is highly valuable for political campaigns and corporate brand research. Why does a candidate's slogan strike some listeners positively? Do opponents of a policy proposal dislike its substance, or the way it is explained? The real clues may hide in emotions, associations, and hesitations that numbers alone cannot capture.

AI could scale up this qualitative research dramatically. When a human conducts every interview, each one costs time and labor. With AI, many conversations can run simultaneously. Naratis claims its approach is "ten times faster, ten times cheaper, and 90% accurate" compared with human-led surveys. A study that once took weeks and tens of thousands of euros could be completed in a day or two.

This speed has significant implications in the political arena. During election periods, public opinion can change rapidly due to a single statement, a gaffe, or an international incident. With traditional surveys, it is not uncommon for the situation to have changed by the time results are available. If AI can gather reactions within 24 hours, campaigns can almost read voters' emotions in real-time and adjust their messages accordingly.

However, this is where the first danger lies. Understanding public opinion faster also means being able to influence it faster. If AI suggests that "this phrasing avoids anger" or "these words mobilize the base," politicians and campaigns can target voters' dissatisfaction and anxiety with ever greater precision. For democracy, is this progress in dialogue or an advance in emotional manipulation?

In discussions about AI opinion polls, it is important to consider two technologies separately. One is the method where AI substitutes for interviews with real humans. The other is the method where AI creates "synthetic respondents" or "digital twins" to answer in place of actual humans.

The former automates the listener and analyst in surveys. The respondents are still human. The latter, based on past data and attribute information, allows AI to speculate "how such a person would answer." While this may be useful for concept testing or hypothesis building in market research, it becomes a serious issue in political surveys.

This is because opinion polls in politics are not merely business documents. Published approval ratings are reported, influencing donations and voting behavior, and creating momentum for candidates. If AI-generated responses are treated the same as public opinion gathered from humans, it is possible that what was measured was not "public opinion" but merely something resembling it.

Existing survey companies are cautious about this point. While Ipsos, a major market research firm, utilizes AI, there is strong caution against using AI-generated respondents in politically sensitive surveys. Bruno Jeanbart, CEO of OpinionWay, also clearly states that they will not publish opinion polls based on AI-generated data. The reason is clear: the biggest asset for the survey industry is "trust."

On social media, reactions to AI opinion polls lean more towards caution than expectation. Particularly in English-speaking tech communities, expressions like "AI opinion polls are fake opinion polls" have spread, and on Reddit, criticisms such as "it's not real humans, just large language models generating responses according to rules" are prominent. Another user argued that even if the methodology is written in small print, many people won't read it, so it should be indicated in a way that cannot be overlooked that the responses are AI-generated.

However, not all reactions are negative. On LinkedIn, some argue that AI-generated synthetic audiences should be positioned as "predictive models" rather than "actual opinion polls." In other words, it makes sense to use AI not as a substitute for humans but for hypothesis testing, bias detection, and early message testing. On this view, the problem is not using AI itself but presenting simulations as if they were real data.

This split in reactions captures the essence of AI opinion polls well. Seen as a convenient tool, AI expands what surveys can do. Some people may find it easier to speak their minds to a machine, even on topics hard to discuss with a human. Respondents who would put on a facade or give socially desirable answers to a human interviewer might be more candid with AI. In France, opinion polls are said to have underestimated far-right support; if AI interviews can capture such "truths that are hard to voice," they hold a real advantage.

AI is also well suited to probing responses. In conventional surveys, the reasons behind a "disagree" answer often go unexplored. A conversational AI, by contrast, can keep asking: "Why do you think so?" "When did that opinion change?" "What events left an impression on you?" AI is likewise adept at classifying large volumes of free-text responses and organizing the emotions and issues within them.

Still, AI has critical weaknesses. Firstly, AI can produce plausible mistakes, known as hallucinations. In the world of surveys, even a slight distortion can lead to major misunderstandings. Secondly, AI heavily relies on past data. It may treat opinions frequently expressed in the past, narratives abundantly left on the internet, and values from English-speaking regions or urban areas as more general than they are in reality.

Thirdly, AI tends to drift toward "average plausibility." Human opinions are contradictory, emotional, and shift with the situation. Someone may agree with a policy but dislike the politician proposing it; worries about household finances may clash with environmental concern. If AI generates overly polished responses, these human fluctuations are erased, and an artificially tidy picture of public opinion emerges.

In fact, research reviews of synthetic respondents point out that while high-level averages may approach human responses, problems appear in the details: differences across attributes, variance, correlations, and regression coefficients. In politics, it is precisely these details that matter. Even if the overall average is right, misjudging the responses of independents in a particular region, young people, voters with immigrant backgrounds, or elderly rural residents will send election strategies and policy decisions astray.
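The failure mode these reviews describe can be sketched with a toy example (all numbers invented for illustration, and the "urban/rural" attribute is hypothetical): two panels share an identical overall average, but the synthetic one flattens a real split between attribute groups.

```python
# Toy illustration (invented data): a synthetic panel can match the overall
# average of a human panel on a 0-10 approval scale while erasing a real
# divide between attribute groups.

human = {"urban": [8, 7, 9, 6, 8], "rural": [2, 3, 1, 4, 2]}
synthetic = {"urban": [5, 5, 6, 5, 4], "rural": [5, 6, 4, 5, 5]}

def overall_mean(panel):
    """Mean across every response, ignoring group labels."""
    responses = [x for group in panel.values() for x in group]
    return sum(responses) / len(responses)

def group_means(panel):
    """Mean per attribute group, e.g. urban vs. rural."""
    return {name: sum(g) / len(g) for name, g in panel.items()}

print(overall_mean(human), overall_mean(synthetic))  # 5.0 5.0 -> identical averages
print(group_means(human))      # {'urban': 7.6, 'rural': 2.4} -> a real divide
print(group_means(synthetic))  # {'urban': 5.0, 'rural': 5.0} -> the divide is gone
```

Any analysis built only on the headline number would call the two panels interchangeable; any analysis by subgroup, correlation, or regression would reach opposite conclusions.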

More serious is the issue of accountability. Traditional opinion polls also had limitations. Bias in survey targets, leading questions, refusal to answer, weighting methods—no survey is perfect. Still, at least by showing who was asked, when, how many, and what the questions were, there was room for external verification.

With AI, this verification becomes complex. Which model was used? What data was it trained on? How was the depth of responses evaluated? How were fraudulent responses detected? If synthetic respondents were used, what was the basis for creating their personas? If such information remains opaque and only numbers are published, it becomes an endorsement of a black box rather than a survey.

As AI opinion polls spread, discussions on regulation will be unavoidable. Especially in the political field, if surveys based on AI-generated data are published, clear labeling obligations or prohibition rules may become necessary. In countries like France, where regulation on opinion polls is relatively strong, there may be restrictions on publishing political surveys using synthetic data.

So, will AI make opinion polls more accurate?

The answer is not simple. AI is strong in speed, cost, analysis of free responses, and conversational depth, areas where traditional surveys struggle. For an industry troubled by a shortage of human interviewers and declining response rates, it is undoubtedly an attractive solution. If AI supplements and analyzes conversations with real humans, it has the potential to enhance the quality of opinion polls.

However, the moment AI starts "responding" in place of humans, the story changes. It is not a measurement of public opinion but a speculation, a simulation, a model output. It can be useful. But one must be cautious in calling it public opinion.

The future mainstream will likely be a hybrid model rather than full automation. AI asks the questions, organizes responses, detects outliers, and forms hypotheses; human researchers supervise the design, verify the results, and bear political and ethical responsibility. AI multiplies the ears, but ultimately humans decide what counts as having been heard. Drawing that line will be crucial.

AI opinion polls may make democracy more convenient. They have the potential to capture unheard voices, visualize complex emotions, and bring policies and reporting closer to reality. However, at the same time, there is a danger of synthesizing, manipulating, and creating a false sense of understanding of public opinion.

The essence of opinion polls is not to create numbers. It is to understand what people living in society fear, what angers them, what they desire, and where they are confused. If AI assists in this task, it will be a welcome advancement. But if AI starts speaking public opinion in place of humans, it will become a dangerously convenient tool for democracy.

Ultimately, the most important thing in opinion polls in the AI era is not how smart AI is. It is how honestly the surveyors can explain what they entrusted to AI and what they heard from humans. Will AI save opinion polls that have lost trust, or will it make them even more questionable? The answer depends not on technology but on the transparency of its use.


Summary of Reactions on Social Media


Reactions to AI opinion polls on social media and in comment sections are broadly divided into three categories.

The most common is strong distrust. In Reddit's tech communities, the loudest reaction to AI-simulated respondents is that "if real humans weren't asked, it isn't an opinion poll." There is strong concern that if surveys built on AI-generated responses spread through headlines and graphs alone, readers will mistake them for actual surveys.

The second is a pragmatic view: AI can be useful if its role is limited. On LinkedIn, some argue that AI-generated synthetic audiences should be treated as predictive models or hypothesis-testing tools rather than measurements of public opinion: usable for early message testing and for checking model bias, but no replacement for actual human voices.

The third is distrust of existing opinion polls themselves. On social media, there are voices questioning, "Aren't surveys with humans also full of bias?" indicating that the issue is not just with AI, but the trust in opinion polls as a whole is wavering. Criticism of AI opinion polls also reflects dissatisfaction with traditional surveys.


Source URL

BBC "Will AI lead to more accurate opinion polls?"
The central source for this article, covering AI agent political polling, Naratis, Ipsos, OpinionWay, and concerns about synthetic data.
https://www.bbc.com/news/articles/cwyw6rylzepo

Article by Info Nasional reconstructed from the BBC piece
Used to cross-check the BBC article: Naratis's claims, declining response rates, and OpinionWay's cautious stance.
https://world.infonasional.com/ai-agents-political-opinion-polling

Reddit: "'AI polls' are fake polls"
Reference for social media reactions. Confirm reactions criticizing AI opinion polls as "not real human voices," concerns about labeling obligations, and misinterpretations.
https://www.reddit.com/r/technology/comments/1sjdfvj/ai_polls_are_fake_polls/

Silver Bulletin / Nate Silver: "'AI polls' are fake polls"
Reference material including discussions that AI opinion polls should be seen as "models" rather than "surveys," and reactions on X.
https://www.natesilver.net/p/ai-polls-are-fake-polls

LinkedIn / Damian Lyons Lowe (Survation) related post
Check industry reactions emphasizing that synthetic data by AI is not a substitute for real humans, and accuracy and accountability are important in political and social surveys.
https://www.linkedin.com/posts/damian-lyons-lowe-33124421_crashed-activity-7453117988657983490-zkrb

Harvard Ash Center "Using AI for Political Polling"
Background material on the potential of AI to make political opinion understanding real-time and concerns about information quality.
https://ash.harvard.edu/articles/using-ai-for-political-polling/

Market Research Society "Synthetic Respondents in Market Research: Risk or Reward?"
Reference material on the risks of synthetic respondents, transparency, data authenticity, and the importance of real human data.
https://www.mrs.org.uk/blog/operations/synthetic-respondents-in-market-research-risk-or-reward

Springer AI & Society "The democratic ethics of artificially intelligent polling"
Reference material on the ethics, explainability, and democratic issues of AI opinion polls, including synthetic respondents and digital trace usage.
https://link.springer.com/article/10.1007/s00146-024-02150-4

MeasuringU "A Review of Experiments with Synthetic Users"
Reference material for a research review pointing out that while synthetic users or respondents may approach humans in averages, problems arise in details such as differences by attribute, correlation, etc.
https://measuringu.com/review-of-experiments-with-synthetic-users/