Overcoming the Courtesy Bias in Constituent Feedback

When I was working for a large NGO, I spent part of my time trying to make the organization more accountable to local partners and beneficiaries. My task was to understand how well our accountability practices were working and to explore how to improve them. Many interesting things came out of our surveys and focus groups, but two things stuck in my mind.

First, it is difficult to interpret the ratings given by partners in the survey without comparisons. To put it simply, is a respondent who rates the organization 7 out of 10 satisfied or not?

Second, their feedback was excessively positive. In focus group discussions we asked participants how comfortable they felt giving us feedback, including criticisms and suggestions. Their message was consistent: “you are our benefactors, we are grateful for any help provided, we would never criticize you”. An inconvenient truth for an organization that is trying hard to work from a rights-based approach…

Through our work in developing and implementing Constituent Voice feedback systems at Keystone, we have come across the so-called ‘courtesy bias’ in survey responses, which resonates with the experience I just described. The term ‘courtesy bias’ is used to describe the tendency for respondents to understate any dissatisfaction because they don’t want to offend the organization seeking their opinion. This trend is observed in all sorts of social research and is particularly marked where power relations are skewed, such as between a big NGO and its local counterparts.


Here I present three ways that we have identified to overcome the courtesy bias. These are not silver bullets, but they do provide ways to minimize the effect that the courtesy bias has on feedback data.

Independent surveying

Respondents are more likely to give honest and candid feedback if they are surveyed by an independent party. We tested this assumption with one of our clients, running two versions of the same household survey simultaneously across 12 villages in Tanzania: one conducted independently by Keystone and one conducted directly by the organization. Respondents were selected randomly and given the same introduction to the survey, which promised anonymity of responses. Respondents were aware, however, when they were taking the survey directly from the client. The findings from this experiment were striking: for every question in which one could expect to see a courtesy bias (for example, assessing the fairness of the organization), there was a 30 percent positive swing.

Net Promoter Analysis

Net Promoter Analysis (NPA)[1] uses 0-10 scales to classify respondents into promoters, passives and detractors, and calculates a single net promoter score (NP Score), as illustrated in the figure below. In our experience, NP Scores are more reliable measures than simple means.

[Figure: 0-10 ratings classified into detractors, passives and promoters]

NPA may appear almost too simple. But thousands of the world’s leading corporations have found it to be a reliable measure of customer loyalty, and a powerful lever for positive organizational motivation and change.

It may seem to some people that dividing respondents into these three groups is somewhat arbitrary. If we have, for example, a scale from “0 – absolutely don’t agree” to “10 – absolutely agree”, why should someone giving a rating of 6 count as a detractor? What the customer satisfaction industry has learned over time, and what we are seeing in our sector, is that this classification is a great way to neutralize the courtesy bias. Empirical evidence shows that people giving ratings in the middle, or just above the middle, of the scale are normally understating their dissatisfaction. The converse does not hold: those giving lower scores are not understating their satisfaction.
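The classification described above lends itself to a very simple calculation. As a rough sketch (the ratings below are made up for illustration), the NP Score can be computed like this, using the conventional cut-offs of 9-10 for promoters and 0-6 for detractors:

```python
def np_score(ratings):
    """Net Promoter score: % promoters minus % detractors.

    Promoters rate 9-10, detractors rate 0-6; 7s and 8s are
    passives and only affect the score through the denominator.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses on a 0-10 scale
ratings = [9, 10, 7, 6, 8, 10, 5, 9, 7, 3]
print(np_score(ratings))  # 4 promoters, 3 detractors -> 10.0
```

Note how the passives (the 7s and 8s) drag the score toward zero without counting for either side, which is part of what makes the NP Score a tougher, more honest measure than a simple mean.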

Benchmarking

Comparing your ratings to those of other organizations doing similar work is another way to interpret your results that avoids the dangers posed by ‘courteous’ respondents. Where ratings are consistently high, it may be difficult to understand what the data actually means for your organization. You might find, for example, that 30% of your local partners are ‘promoters’ (i.e. give ratings of 9 or 10) when asked about the overall value they get from partnering with your organization. This data takes on a whole new meaning if you can compare it against the average for other similar organizations. That is how you can see what is ‘normal’ and whether your score is truly low or high. You may be interested to know, for example, that the average proportion of promoters for other organizations similar to yours is only 15%.
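As a minimal sketch of this kind of comparison, using the illustrative 30% and 15% figures from the example above (the rating data and benchmark value are hypothetical):

```python
def promoter_share(ratings):
    """Percentage of respondents giving a rating of 9 or 10."""
    return 100.0 * sum(1 for r in ratings if r >= 9) / len(ratings)

# Illustrative figures matching the example in the text
your_share = 30.0       # e.g. promoter_share(your_partner_ratings)
benchmark_share = 15.0  # average among comparable organizations

print(f"{your_share - benchmark_share:+.0f} points vs. benchmark")
# prints "+15 points vs. benchmark"
```

The absolute number on its own says little; it is the gap against the benchmark that tells you whether a 30% promoter share is strong or merely courteous.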

Have you come across any other ways of overcoming the ‘courtesy bias’ when gathering constituent feedback? If so, please use the comments space below to share your experience.

By Natalia Kiryttopoulou, Senior Consultant at Keystone Accountability


[1] ‘Net Promoter’ is a registered trademark of Fred Reichheld, Bain & Company and Satmetrix. For more see: www.netpromotersystem.com, as well as the open source net promoter community at www.netpromoter.com.

One Response so far.

  1. jindra cekan says:

    Very helpful input on bias. That’s the difficulty international development projects can face when evaluating their projects’ effectiveness – bias. What I propose is using independent, national evaluators working with communities to evaluate projects’ sustainability in their communities.
    Too often everyone wants the project resources to continue, so they are hesitant to share the (un)intended impacts. I like the scaling as well.
    It’s their voices that are missing, unbiased :).
    Interested? See ValuingVoices.com/blogs
    Cheers! Jindra
