Bore Your Clients with Endless Surveys

The cornerstone of measuring customer satisfaction in the business world is the Net Promoter Score (NPS), which is derived from how people answer this question:

NPS Calculation
On a scale of 0 to 10, how likely are you to recommend Company X to your friends and family?

The NPS is the percentage of customers who rate you a 9 or 10 (“promoters”), minus the percentage who rate you 6 or less (“detractors”); 7s and 8s (“passives”) are ignored. As the ranges for these three categories suggest, people tend to give high scores even when they wouldn’t actually recommend you, so only 9s and 10s count as true promoters of your organization. Anything below a 7 means a person is more likely NOT to recommend you to others.
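The calculation is simple enough to express in a few lines of code. Here is a minimal Python sketch (the function name and sample ratings are ours, invented for illustration, not from any particular survey tool):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (ratings of 9-10) minus
    % detractors (ratings of 0-6). Passives (7-8) count toward
    the total but contribute to neither side."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 -> 50% - 20% = 30
print(nps([9, 10, 10, 9, 9, 7, 8, 7, 4, 6]))  # 30.0
```

Note that a score of 0 does not mean no one recommends you; it means promoters and detractors exactly cancel out.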


NPS Ranges by Industry


To understand the context of NPS rankings, take a look at the ranges by industry as benchmarked by Satmetrix. Not surprisingly, health insurance companies, Internet service providers, and life insurance companies rank among the lowest of any industry. While the NPS is most commonly used in the private sector, similar estimates have been calculated for government agencies, ranging from 52 for the CDC to -15 for the IRS (which still places it above lawyers and airlines in general opinion polls). Using public opinion polls and the NPS formula, it’s estimated that the U.S. Congress would score around -60. However, these opinion polls did not use the same rating scale, so they are only rough estimates. Having a gauge of both public- and private-sector providers can help us understand where service-sector organizations fit into the mix.


False Positives

GlobalGiving recently added the NPS to their Storytelling Project do-it-yourself form builder. It is one of 35 optional questions to choose from in the pool:

[Image: the net promoter question as one of the optional questions in the Storytelling Project form builder]

And from this one question it would be possible to benchmark beneficiary satisfaction with the work of local non-profits, in spite of the positive bias in every answer. Keystone Accountability, a London-based non-profit (and Feedback Labs member) that uses evaluation and analysis to increase the effectiveness of social service organizations, has built its business around this idea. Its founder, David Bonbright, spent years studying JD Power & Associates’ model for benchmarking car quality and adapting it to the nonprofit world.

In 2010 Keystone offered nonprofits a free client satisfaction benchmarking tool, which GlobalGiving has been using with our hundreds of partner organizations each year. In fact, according to David, we may be the only nonprofit that has continued to use this tool with NGO clients every year since the tool was created.

At first glance, the results show stunning improvement in GlobalGiving’s partner organization satisfaction:

[Image: chart of GlobalGiving partner organization satisfaction scores by year]

In 2010 GlobalGiving’s clients appeared to be less satisfied with us than other clients were satisfied with other organizations in our category (funding sources). We scored a 7.41 on average compared with 8.27.

However, in 2011 the trend reversed. We jumped to 8.79, then to 9.02 and 9.00 in 2012 and 2013, respectively. What happened? Did we suddenly become that much better? We’d like to believe that, of course. But the numbers were too good to be true. There’s another, more likely reason behind the shift, which we discovered by asking the net promoter question of the same clients on two surveys in the same year.


Longer Surveys Dissuade Detractors From Giving Feedback

The real story here is that for the last three years we’ve been sending all of our partner organizations an annual survey in Wufoo with about 30 questions. At the end of this survey, people are invited to continue on to a second, anonymous survey administered by Keystone. Most people do not. Those who do are twice as likely to be net promoters of GlobalGiving as they are to be detractors. We happened to ask the same net promoter question on BOTH surveys, which is what gives us the real insight into the ‘bore away the bad news’ effect. Our net promoter score appears to be 36 when calculated from the first Wufoo survey, or 71 when calculated from the subsequent Keystone survey.


[Table: net promoter responses and # of respondents, Survey Part 1 (GlobalGiving) vs. Survey Part 2 (Keystone); the Keystone column is marked “Not Asked” for the first year]


So that jump in net promoter score from 2010 to 2011 is more likely explained by the fact that more people in total answered the question on the shorter 2010 survey than on the longer 2011 survey, including more detractors.

Since 2011, when we asked the question twice, we always see a lower score when a larger proportion of our client base answers the NPS question. But even so, each year that score has improved.
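This selection effect can be sketched numerically. In the toy example below, the rating counts and continuation rates are invented for illustration (they are not our actual survey data); the only assumption carried over from our findings is that promoters continue to the follow-up survey at twice the rate of detractors:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical first-survey ratings: 55 promoters, 25 passives,
# 20 detractors out of 100 respondents (counts invented).
first_survey = [10] * 45 + [9] * 10 + [8] * 15 + [7] * 10 + [6] * 12 + [4] * 8
print(nps(first_survey))  # 35.0

# Suppose promoters are twice as likely as detractors to continue to the
# follow-up survey (40% vs. 20%, passives at 30%), modeled here as
# integer weights of 4 : 3 : 2 per ten respondents.
weight = lambda r: 4 if r >= 9 else (2 if r <= 6 else 3)
followup = [r for r in first_survey for _ in range(weight(r))]
print(round(nps(followup)))  # 54 -- same clients, inflated score
```

No one became more satisfied between the two surveys; the score rises simply because detractors drop out before answering a second time.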

I’m not saying that our 2013 score of 36 is bad. It places us somewhere between TripAdvisor (NPS = 33) and Costco (NPS = 71). However, I believe we must be ahead of much of the NGO world on client satisfaction (our clients being partner NGOs), because we’ve asked this question every year for four years, and in that time only four other organizations in our category (funding organizations) have bothered to use this free feedback tool at all. Organizations that don’t ask whether clients are satisfied tend to have much lower scores than those that do. And we definitely care a lot about what our partners think of us.

And, of course, we also ask our donors for feedback. Our donor net promoter score is 45, in case you were curious.


Shorten Your Surveys!

Discovering this ‘bore away the bad news’ effect within our own data makes a strong case for why we should all use shorter surveys. Facebook is the ideal marketing survey; it has only one question, and people answer it several times a day.

GlobalGiving’s do-it-yourself storytelling form is longer, but we intentionally limit it to what will fit on the front and back of one sheet of paper, because we don’t want the length of the questionnaire to limit who we hear from. As good as a high score makes us feel, hearing from our detractors is the key to organizational progress.

Marc Maxson is an Innovation Consultant at Global Giving and a PhD neuroscientist. He was formerly a Peace Corps Volunteer in The Gambia (1999-2001) and did a Fulbright research project around the impact of computers and the Internet on rural education in West Africa.

