Reposted from Charity Navigator's President and CEO's March 2014 report
A guest post from David Bonbright, CEO of Keystone Accountability
It is time for another walk through the new Results Reporting dimension of the Charity Navigator evaluation system. Since this time we are looking at element four, Constituent Voice, Ken has asked me to “take the pen.” I am a co-founder of Keystone Accountability, a nonprofit consulting firm specializing in Constituent Voice, and we served as consultants to Charity Navigator in developing its approach to results reporting.
Constituent Voice combines two simple ideas. The first is at the heart of helping others: Listen. Really listen, as in be ready, willing and able to change in response to what we hear. The second is a specific application of basic organizational management: be systematic in that listening. It is of limited use to an organization if its frontline staff are great listeners but their knowledge is never captured and transformed into organizational knowledge and culture.
In its approach to Constituent Voice, Charity Navigator is trying to understand whether and how well charities are living out these two ideas. The six yes/no questions that Charity Navigator is using to assess Constituent Voice practices are deliberately set to the minimum that any charity should meet to be confident that it is responsive and effective in its programming.
1) Does the charity publish feedback data from its primary constituents?
By publishing feedback data, a charity enables its primary constituents and the giving public to “see for themselves” what constituents think of the organization.
2) Does the published feedback data include an explanation of how likely it is to be representative of all primary constituents?
Without this, we have no way of knowing whether the published feedback has been selected to make the organization look good. Note that once we determine how we will rate the Results Reporting dimension, we anticipate that an organization may get full credit for this question if it is transparent about the selectivity of what it publishes. I think of this as akin to the Surgeon General’s warning on cigarettes. Charity Navigator may post a warning for these organizations that says, “This organization’s published constituent feedback is likely to be doctored to make the organization look more responsive to constituents and effective than it actually is.”
3) Does the data include an explanation of why the organization believes the feedback is frank and honest?
Similarly, we know that the primary constituents of organizations have many reasons not to be frank and honest in their feedback; fear of retribution is one example. We anticipate that an organization may get full credit for this question if it is transparent about how it addresses this possible bias in its feedback data.
4) Is that data presented in a way that shows changes over time going back at least one year?
Part of being systematic about feedback is tracking responses to the same question, asked the same way, over time. Time series feedback data is very useful for interpreting and acting on feedback. Charity Navigator will be careful not to penalize organizations for lacking prior-year data when they first report. As time goes by, we expect to extend the required historical reporting period to at least three years.
5) Does the data include questions that speak to the organization’s effectiveness?
For the purposes of the Constituent Voice requirements, there is little point in publishing answers to trivial questions like, “Are you satisfied with the color scheme on our website?” At least some of the published feedback needs to speak directly to organizational effectiveness. For example, “On a scale of 0 to 10, to what extent is it worth your while to engage with this organization to make it work better for you and your family?”
6) Does the organization report back to its primary constituents what it heard from them?
This is Psychology 101. If you do not tell your survey respondents what you learned from them, and what you are going to do with it, why would they take your next survey seriously? How could they hold you to account? How would you know if you had understood the original feedback correctly? How would you know if your proposed responses were likely to succeed?
My own experience is that answering these six questions should not tax a competent organization. But more importantly, when an organization takes Constituent Voice seriously, amazing things start to happen. I will talk more about the benefits of rigorous Constituent Voice practice in another blog contribution, but to see how one human services organization, LIFT, is making extraordinary discoveries with it, click here.
Finally, there are some great resources out there for charities that want help – free and otherwise – to cultivate Constituent Voice. Our website collects a few of these, and I recommend our recent Technical Note on Constituent Voice methodology. Last year a group of Constituent Voice specialist organizations banded together to form The Feedback Labs, a place where tools, services and knowledge on constituent feedback come together. One of its early free services is The Feedback Store, a searchable online catalogue of feedback service providers.
Good luck with your Constituent Voice endeavors. Remember, when you listen, listen systematically!
Best regards,
David
David Bonbright is co-founder and CEO of Keystone Accountability, which helps organizations develop new ways of planning, measuring and reporting on their results by incorporating feedback from beneficiaries and other constituents.