By Renee Ho | February 11, 2016


What happened to Donald Trump? He was leading each of the last 10 Iowa polls but was defeated in the state’s caucus last week. His seven-point lead in the polling averages turned into a three-point loss. How were the polls so wrong?

It’s actually not so surprising. Polls have failed to predict many elections. They massively miscalculated Truman vs. Dewey in 1948, (Hillary) Clinton vs. Obama in New Hampshire in 2008, and Netanyahu vs. Herzog in 2015. The list goes on and on.

At Feedback Labs, we’ve talked about polling (see here, here, here, here, and here) as a means to collect constituent feedback. But while we’re impressed with its potential, we’re also wary of its risks. Polls seem able to represent public opinion, yet time and again we see seemingly irrefutable polling data fall completely flat.

The case of deliberative polling from the Center for Global Development is an example of pretty good polling: in their study, they took a representative sample of citizens from all over Tanzania, brought 400 of them to the capital—Dar es Salaam—and helped them understand and voice how they think the government should spend windfall revenue. (Tanzania recently discovered natural gas reserves that may yield from $15 to $75 per capita in government revenue each year.)

After the deliberative poll, which included videos, group discussions, and Q&A with experts, the following results were found:

  • Tanzanians are more responsive to arguments against fuel subsidies
  • Even after deliberation, they have little interest in saving gas revenues for the long term
  • They oppose leveraging gas to borrow overseas
Here’s another case of polling, only this one made me raise an eyebrow. This Center for Global Development event was entitled: “What are Africans’ Real Development Priorities? And What do they Think of Aid Agencies?” It discussed findings from a Pew Research Center poll about—as the title suggests—the concerns and priorities people have, and what they think of government, foreign aid organizations, and businesses when it comes to solving major problems in their countries.

It’s a fascinating subject, but can we trust this public opinion poll when so many other polls have been wrong? How representative is it, really?

Let’s go back to some of the problems with polling in the American context. Election pollsters sample something on the order of 2,000 people out of the more than 200 million Americans who are eligible to vote. A typical response rate—the number of people who take a survey as a percentage of those who were asked—is in the single digits today. The participation rate—the number of people who take a survey as a percentage of the eligible population—is even lower.
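To make those two denominators concrete, here is a minimal sketch in Python. The numbers are hypothetical, chosen only to match the orders of magnitude above; they are not figures from any real poll:

```python
# Hypothetical illustration of the two rates; not figures from any real poll.
eligible_voters = 200_000_000   # roughly, per the paragraph above
people_contacted = 25_000       # assumed number of call attempts
completed_surveys = 2_000       # the final sample

response_rate = completed_surveys / people_contacted
participation_rate = completed_surveys / eligible_voters

print(f"Response rate:      {response_rate:.1%}")       # 8.0% -- single digits
print(f"Participation rate: {participation_rate:.3%}")  # 0.001%
```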

To be sure, Pew’s face-to-face interviews probably resulted in higher response rates than telephone polls do in the U.S. Moreover, their survey of 1,000 people per country can, in theory, be nationally representative. But I still have some questions (please note I’m not a professional pollster):

1. Can you really survey only 9 countries and make the claim that “Health Care, Education are Top Priorities for Sub-Saharan Africa”? Last I checked, there are 54 countries on the African continent and 48 countries in Sub-Saharan Africa.

2. Who is not responding, and how are we counting (or excluding) them? A political pollster told me that for these face-to-face interviews, surveyors go house to house until their sample size is achieved. That means it’s quite possible that some people are not responding and we just move on without their voice. The non-response may not be random, so this, in turn, might skew what we really know about public opinion.

Take the question of gender. We know from the Pew Global website that they weight responses by gender, but we don’t know the extent of the sampling error. For example, if you survey 100 people in a country whose population is 50% male and 50% female, you’d want 50 of your respondents to be male and 50 female. But if you actually get a breakdown of 55 male, 45 female, you weight the responses so that they adjust to the actual population parameter. A 55/45 split doesn’t seem like a big problem, but what if it’s 80 male, 20 female?
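To make that weighting step concrete, here is a minimal sketch of gender post-stratification in Python (the splits are the hypothetical ones above; real polls weight on several variables at once). The effective-sample-size line uses Kish’s approximation to show why an 80/20 split costs far more precision than a 55/45 one:

```python
# A minimal sketch of post-stratification weighting by gender.
# The 55/45 and 80/20 splits are the hypothetical ones discussed above.

def gender_weights(n_male, n_female, pop_male_share=0.5):
    """Weights that rescale each respondent group to its population share."""
    n = n_male + n_female
    w_male = pop_male_share / (n_male / n)
    w_female = (1 - pop_male_share) / (n_female / n)
    return w_male, w_female

def effective_sample_size(weights):
    """Kish's approximation: (sum of weights)^2 / sum of squared weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

for n_male, n_female in [(55, 45), (80, 20)]:
    w_m, w_f = gender_weights(n_male, n_female)
    weights = [w_m] * n_male + [w_f] * n_female
    print(f"{n_male}/{n_female} split: male weight {w_m:.2f}, "
          f"female weight {w_f:.2f}, effective n = "
          f"{effective_sample_size(weights):.0f} of 100")
# 55/45 split: male weight 0.91, female weight 1.11, effective n = 99 of 100
# 80/20 split: male weight 0.62, female weight 2.50, effective n = 64 of 100
```

In other words, weighting corrects the point estimate, but not for free: an 80/20 sample of 100 carries roughly the statistical information of 64 unweighted respondents, so the margin of error quietly widens.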

3. As people try more and more to survey Sub-Saharan Africa by mobile phone, will we run into lower response rates and greater digital divides? In other words, when will survey fatigue render the poll useless? And if we use mobile phones, are we just building on an existing technology divide, and maybe even exacerbating the inequality among the voices that get listened to?

In a study that investigated the potential of mobile phone surveys in poor countries, there were serious challenges to getting a nationally representative sample. In Afghanistan, rural respondents accounted for nearly 60 percent of the sample yet were still under-represented nationally by 20 percentage points. The female population was under-represented by 28 percentage points, a gap attributed mainly to fewer rural female respondents.

I’m not saying this is due to low rural female access to mobile phones alone. But it is a question to consider if we continue trying to “get voice” with technologies certain populations don’t use as much.
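To see how non-random non-response plays out, here is a small simulation in Python. All the numbers are invented for illustration: rural women make up 30% of a fictional population, hold a different view from everyone else, and answer the survey at a tenth of the rate. No real survey data is being modeled:

```python
import random

random.seed(0)

def person():
    """One fictional citizen: (group, supports_policy)."""
    if random.random() < 0.30:                     # 30% are rural women (invented)
        return "rural_f", random.random() < 0.20   # 20% support the policy
    return "other", random.random() < 0.60         # 60% support it

population = [person() for _ in range(100_000)]
true_support = sum(v for _, v in population) / len(population)

# Invented response rates: rural women answer far less often.
response_rate = {"rural_f": 0.05, "other": 0.50}
sample = [(g, v) for g, v in population if random.random() < response_rate[g]]
sample_support = sum(v for _, v in sample) / len(sample)

print(f"True support:    {true_support:.1%}")    # about 48%
print(f"Survey estimate: {sample_support:.1%}")  # about 58% -- biased upward
```

Weighting can repair this only if the under-responding group is measured and its population share is known; if the trait driving non-response is unobserved (the “5th variable” that comes up in the comments below), the bias survives the weighting.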

The power polls have in American politics (determining whether candidates can even appear in a political debate, for example) is astronomical. Just because you don’t have a landline (the U.S. Telephone Consumer Protection Act of 1991 prevents autodialing of cell phones), should your voice not count? Or just because you don’t want to respond to a private, commercial pollster’s call (for free), do you forfeit your civic participation?

Cliff Zukin, past president of the American Association for Public Opinion Research, puts it bluntly:

“Election polling is in near crisis, and we pollsters know.”

Scott Keeter, director of survey research at the Pew Research Center—the same organization that tried to find out “what are Africans’ real development priorities?”—put it more gently:

“So accuracy in polling slowly shifts from science to art.”

If we use polls to collect constituent feedback, a lot is at stake. On the one hand, we’re doing far more to gather constituent feedback than if we did nothing. On the other hand, we may be hearing only from a select few who aren’t very representative.

If you limit the channels through which people can give feedback to actual decision makers, then you need to make sure that those channels, whether they are polls or other tools, are robust and fair.

Voices—especially those of the poor—are easily co-opted. Grassroots organizing gets astroturfed, young women’s aspirations get misrepresented by men, and the 41% of US households that have a cell phone but no landline (disproportionately younger and poorer households) are increasingly not polled. Polling is political: done well, it can help democratize aid and philanthropy. Done poorly, it may leave us concluding that feedback isn’t the right thing to do after all.

3 Responses to “Does polling make for good feedback?”

  1. d

    February 16, 2016

    While I don’t disagree that surveys can be a move in the right direction, it really depends what you do with them. Even a survey that is inaccurate in the eyes of an omniscient being can be used in a process of analysis, dialogue, and co-creating solutions that leads to learning and improvement. And a “perfectly representative” survey can make no difference to anyone. And in defense of the much maligned “complaint box” (I prefer “suggestion box”), I would suggest that this is an instrument that is wonderfully well fit for purpose, and has done more good in the world than its more popular cousin, the survey.

  2. Renee Ho

    February 15, 2016

    Hi Bill,

    Thanks for your comment – you make some great points. I absolutely agree that with a good sample size, the number of people is not, in itself, a problem. But perhaps I still misunderstand your point about response rate?

    My concern about low response rate is that it can create bias – that the sample won’t be representative. I get that we try to correct for “non-response bias” by giving more weight to the demographic groups that are less likely to respond. But to my point above, it’s good to know what that response rate is.

    Then, say we have 4 variables like age, gender, education level, and income. We get a bunch of non-responses, so we keep going to more numbers/households until we fill our quota for these variables. But then it turns out there’s a 5th variable, like ethnicity, which might explain the non-response. In this case, isn’t it that the responses – even having met the sample size quota – are biased because they don’t include people from the unresponsive ethnicity?

    Agree with your last point that surveys are a move in the right direction! It’s just good to be aware of both what they reveal and what they potentially mask.

  3. Bill Savedoff

    February 15, 2016

    Hi Renee,

    I’m glad you’re questioning polling, but I think your blog is a little mixed up. The number of people who are polled in a survey and even the response rates are not, in themselves, a problem for getting accurate percentage views. The most important thing is that the resulting sample is not systematically biased.

    The rest of your blog does address the latter point – are 9 countries representative of Africa? What if people with cell phones are overrepresented? Those are questions about the bias in the survey, not about the size.

    But I think the more important question for Feedback Loops is “When does a survey give you useful feedback?” A health facility with a complaint box is likely to get only a handful of complaints from the most disgruntled and least inhibited patients; whereas a random questionnaire for clients exiting a facility is likely to give a much better picture of the range of experiences. The survey may still not be representative, but it is a move in the right direction.

