At Feedback Labs we have been working to address data interoperability challenges within our sector. Last year, Marc Maxmeister conducted a baseline study to gather a rough understanding of our influence on the universe of organizations interested in feedback. This blog post highlights another mechanism that Marc used to collect data on the varying perspectives of donors and beneficiaries. It was originally published by Keystone Accountability, and is republished here with the permission of the author.
We are excited to announce a new stream of LabStorms this year: “DataStorms” will be open to any organization interested in communicating data between platforms, especially data that helps close feedback loops or rewards others for doing so. Interested? Email Joanne at firstname.lastname@example.org for more details!
“We don’t see things as they are, we see things as WE are,” as Anais Nin once said. When this power of personal perspective is used to transform the world, we call it “vision”. But it can also color the lens through which we see the world. In that case, we call this misperception “bias”.
Observer bias is the reason evaluators and analysts cling to the clarity of numbers, sums, averages, and error bars. But statistics won’t save you from a lie if the assumptions behind the tests go unexamined.
At Keystone Accountability, we rely on contrasts between groups and comparisons among similar organizations (called benchmarking) as one way to discover this bias. Large sample sizes across many contexts also help, and to obtain these we work with partners such as GlobalGiving.
Here is what we found when we compared two perspectives on hundreds of organizations. In the first case, GlobalGiving donors received a report from an organization they’d previously given to and were asked, “How likely are you to recommend this organization to a friend or colleague (0-10)?”
In the second case, people who lived in communities in East Africa were asked to talk about a time when a person or organization tried to help someone or change something in their community. What happened? Some of these storytellers were also asked the same “recommend” question about the organization they chose to talk about. We then used the responses from those who answered it to impute scores for everyone else who told a story and answered similar questions with a positive or negative outcome.
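In outline, that imputation step might look like the sketch below: compute the average “recommend” score among scored stories, grouped by whether the story's outcome was positive or negative, and assign each unscored story its group's average. This is a hypothetical reading of the procedure, not Keystone's actual code, and the field names are illustrative:

```python
from statistics import mean

def impute_scores(stories):
    """Fill in missing 'recommend' scores by outcome group.

    `stories` is a list of dicts with keys 'outcome' ('positive' or
    'negative') and 'score' (a 0-10 rating, or None if the question
    was not answered). Unscored stories receive the mean score of
    scored stories sharing the same outcome. Returns a new list.
    """
    # Mean score per outcome, computed from the scored stories only
    group_means = {}
    for outcome in {s["outcome"] for s in stories}:
        scored = [s["score"] for s in stories
                  if s["outcome"] == outcome and s["score"] is not None]
        if scored:
            group_means[outcome] = mean(scored)

    # Copy each story, filling a missing score with its group's mean
    return [
        {**s, "score": s["score"] if s["score"] is not None
              else group_means.get(s["outcome"])}
        for s in stories
    ]
```

The design choice here is the simplest possible one (a group mean); a real analysis would likely condition on more than the outcome alone.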
Afterwards, I used the mission statements or story texts to categorize all organizations by the type of work they performed, and presented both groups side by side at the recent Feedback Labs Summit. Scores range from -100 (horrible) to +100 (perfect).
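A score that runs from -100 to +100 based on a 0-10 “how likely to recommend” question is typically computed Net Promoter-style: the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). Assuming that standard rule applies here, the arithmetic is:

```python
def nps_score(responses):
    """Compute a Net Promoter-style score from 0-10 ratings.

    Promoters rate 9-10, detractors 0-6 (7-8 are passives).
    The score is the percentage of promoters minus the percentage
    of detractors, so it ranges from -100 (all detractors) to
    +100 (all promoters).
    """
    if not responses:
        raise ValueError("no responses to score")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```

On this scale, a 10-15 point gap between donors and beneficiaries, as reported above, is a sizeable difference.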
Not all types of aid work are equally satisfying to the people served, nor to those who donate. Donors are more likely to recommend an organization than are the people served, by about 10 to 15 points across the board. But two of these eight categories win the prize for the biggest discrepancy. Donors view social investors (e.g. microloans) negatively, but beneficiaries do see their value. In fact, beneficiaries value the work of social investors more highly than all other types of work! That deserves more follow-up, which you can do on storylearning.org, a site I created and maintain for that sort of thing.
The other big contrast is that beneficiaries see more value in advocacy organizations than the donors to those organizations do. Cooperatives and self-help groups also get strong praise from their members, as well as from their donors.
There’s a lot to unpack here, and the details are in the, er, details. Every score has comments associated with it, and even pages of narrative in the case of the storytelling data.
This data is part of our benchmarking tools in the FeedbackCommons. If you use our one-question feedback survey, featuring the question “how likely are you to recommend…”, and ask some people to answer it, the Commons will email you the results so you can compare your organization with these benchmarks.