What happened to Donald Trump? He was leading each of the last 10 Iowa polls but was defeated in the state’s caucus last week. His seven-point lead in polling averages amounted to a three-point loss. How were the polls so wrong?
It’s actually not so surprising. Polls have failed to predict many elections: they famously miscalled Truman vs. Dewey in 1948, (Hillary) Clinton vs. Obama in New Hampshire in 2008, and Netanyahu vs. Herzog in 2015. The list goes on and on.
At Feedback Labs, we’ve talked about polling (see here, here, here, here, and here) as a means to collect constituent feedback. But while we’re impressed with its potential, we’re also wary of its risks. Polls seem to represent public opinion, yet time and again we see seemingly irrefutable polling data fall completely flat.
The deliberative polling study from the Center for Global Development is an example of pretty good polling: researchers took a representative sample of citizens from all over Tanzania, brought 400 of them to the capital, Dar es Salaam, and helped them understand and voice how they think the government should spend windfall revenue. (Tanzania recently discovered natural gas reserves that may yield $15 to $75 per capita in government revenue each year.) After the deliberative poll, which included videos, group discussions, and Q&A with experts, they found that:
- After deliberation, Tanzanians were more receptive to arguments against fuel subsidies
- Even after deliberation, voters had little interest in saving gas revenues for the long term
- They opposed borrowing overseas against future gas revenue
Here’s another case of polling, only this one made me raise an eyebrow. This Center for Global Development event was titled “What are Africans’ Real Development Priorities? And What do they Think of Aid Agencies?” It discussed findings from a Pew Research Center poll about—as the title suggests—the concerns and priorities people have and what they think of the government, foreign aid organizations, and businesses that seek to solve major problems in their countries.
It’s a fascinating subject, but can we trust this public opinion poll when so many other polls are wrong? How representative is their sample, really?
Let’s go back to some of the problems with polling in the American context. Election pollsters sample something on the order of 2,000 people out of the more than 200 million Americans who are eligible to vote. A typical response rate—the number of people who take a survey as a percentage of those who were asked—is in the single digits today. The participation rate—the number of people who take a survey as a percentage of the eligible population—is even lower.
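To make the distinction concrete, here’s a minimal sketch in Python. The contact and completion counts are hypothetical, chosen only to illustrate a single-digit response rate at the sample sizes mentioned above:

```python
# Hypothetical counts for illustration; real figures vary poll to poll.
eligible_voters = 200_000_000  # rough U.S. eligible-voter population
contacted = 25_000             # assumed: people the pollster tried to reach
completed = 2_000              # assumed: people who finished the survey

response_rate = completed / contacted              # share of those asked
participation_rate = completed / eligible_voters   # share of all eligible voters

print(f"Response rate:      {response_rate:.1%}")       # 8.0% (single digits)
print(f"Participation rate: {participation_rate:.4%}")  # 0.0010%
```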
To be sure, Pew’s face-to-face interviews probably resulted in higher response rates than telephone polls do in the U.S. Moreover, their survey of 1,000 people per country could in theory be nationally representative. Regardless, I still have some questions (please note I’m not a professional pollster):
1. Are the samples truly representative? Can you really survey only 9 countries and make the claim that “Health Care, Education are Top Priorities for Sub-Saharan Africa”? Last I checked, there are 54 countries in the continent of Africa and 48 countries in Sub-Saharan Africa.
2. Who is not responding, and how are we counting (or excluding) them? A political pollster told me that for face-to-face interviews, surveyors go house to house until they reach their target sample size. That means it’s quite possible that some people decline to respond and surveyors simply move on without their input. That non-response may not be random, so it might skew what we really know about public opinion.
Take the question of gender. We know from the Pew Global website that they weight responses by gender, but we don’t know the extent of the sampling error. For example, if you survey 100 people and the country’s population is 50% male and 50% female, you’d want 50 of your respondents to be male and 50 female. If you actually get a breakdown of 55 male, 45 female, you’d weight your responses so they adjust to the actual population parameter. A 55/45 split doesn’t seem like a big problem, but what if your respondents are split 80/20? The more extreme the weights, the larger the effective sampling error (see the sketch after this list).
3. Will digital surveys cause fatigue? As people increasingly try to survey Sub-Saharan Africa by mobile phone, will we run into lower response rates and wider digital divides? In other words, when will survey fatigue render the poll useless? If we use mobile phones, we may be building on an existing technology divide, and that reliance could exacerbate the inequality among voices that are heard. In a study that investigated the potential of mobile phone surveys in poor countries, researchers ran into serious challenges getting a nationally representative sample. In Afghanistan, rural respondents accounted for nearly 60 percent of the sample yet were still under-represented by 20 percentage points relative to the national population. The female population was under-represented by 28 percentage points, reportedly due in part to fewer rural female respondents.
I’m not saying this under-representation is due to rural women’s limited access to mobile phones alone. But we must consider the limitations of mobile if we keep surveying with technologies that certain populations don’t use.
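Since I raised the 80/20 question above, here is a minimal sketch of how weighting plays out, assuming simple two-group post-stratification and the standard Kish approximation for effective sample size. The respondent counts are hypothetical, chosen to mirror the gender splits in point 2 and the rough rural/urban skew in point 3:

```python
# A minimal sketch (I'm not a pollster) of post-stratification weighting
# and the effective-sample-size penalty that skewed samples pay.
# Kish approximation: n_eff = (sum of weights)^2 / (sum of squared weights).
# All counts below are hypothetical, per 100 respondents.

def poststrat_weights(n_a, n_b, pop_a, pop_b):
    """Weight group A and B respondents to match their population shares."""
    n = n_a + n_b
    return [pop_a / (n_a / n)] * n_a + [pop_b / (n_b / n)] * n_b

def effective_n(weights):
    """Kish effective sample size: equal weights give n, unequal give less."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

scenarios = [
    ("55/45 gender sample, 50/50 population", 55, 45, 0.5, 0.5),
    ("80/20 gender sample, 50/50 population", 80, 20, 0.5, 0.5),
    ("60/40 rural/urban sample, ~80/20 population", 60, 40, 0.8, 0.2),
]
for label, n_a, n_b, pop_a, pop_b in scenarios:
    w = poststrat_weights(n_a, n_b, pop_a, pop_b)
    print(f"{label}: effective n = {effective_n(w):.0f} of 100")
```

Under these assumptions, a 55/45 split weighted to 50/50 costs almost nothing (an effective sample of about 99 out of 100), but an 80/20 split shrinks it to about 64, and the rural/urban skew to about 86. Weighting fixes the averages while inflating the sampling error, which is exactly why heavily skewed samples worry me.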
The power polls have in American politics (determining whether candidates can even appear at a political debate, for example) is enormous. Just because you don’t have a landline (the U.S. Telephone Consumer Protection Act of 1991 prevents autodialing of cell phones), should your voice not count? Or just because you don’t want to answer a private, commercial pollster’s call (for free), do you forfeit your civic participation?
Cliff Zukin, past president of the American Association for Public Opinion Research, puts it bluntly:
“Election polling is in near crisis, and we pollsters know.”
Scott Keeter, director of survey research at the Pew Research Center—the same organization that tried to find out “what are Africans’ real development priorities?”—put it more gently:
“So accuracy in polling slowly shifts from science to art.”
A lot is at stake when we use polls to collect constituent feedback. On the one hand, we’re doing better than if we did nothing. On the other hand, we’re likely hearing from a select few who may not be very representative. If you limit the channels through which people can give feedback to actual decision makers, then you need to make sure those channels, whether they are polls or other tools, are robust and fair.
Voices, especially those of the poor, are easily co-opted. Grassroots organizing gets astroturfed, young women’s aspirations get misrepresented by men, and the 41% of US households that have a cell phone but no landline (households that skew younger and poorer) increasingly go unpolled. Polling is political: done well, it can help democratize aid and philanthropy. Done poorly, it makes feedback seem like a waste.