At Feedback Labs, we’re constantly looking for lessons from outside our bubble that shed light on how to improve specific aspects of closed feedback loops.
Analysis, the third step of the feedback loop, is typically seen as a rather tech-heavy endeavor, especially if you have collected a lot of feedback data. Discovering or designing just the right algorithm or running your feedback data through the latest natural language processor is often portrayed as the best way to make sense of the feedback you receive. This can feel intimidating, especially if your organization doesn’t happen to have a trained data scientist on staff.
But what if we applied the ethos of relying on regular people to know what’s best for themselves to the Analysis step as well? Using a filter of “peer review” as a primary analysis tool might allow the most promising ideas buried in feedback data to float to the top, making feedback easier to parse and quicker to act on, and leading to better outcomes, faster.
An example that has stuck with me this week comes from Ross Baird and Victoria Fram at Village Capital, in a post reflecting on the seeds of their success in the social entrepreneurship investment field. Village Capital has helped pioneer a new model of investment, partly by relying on entrepreneurs to lend their expertise and insight to a system of peer-review of investment ideas. And seven years after their founding, they boast an above-average hit rate.
Village Capital’s method mirrors the one described in Justin Berg’s study of “creative forecasting” among circus professionals, which profiles the success of Cirque du Soleil’s process for creating, selecting, and marketing individual acts. As it turns out, neither Creators (performers) nor Managers (show producers) are very good judges of whether a new act will resonate with an audience. From Adam Grant’s Originals:
“Berg found that Managers were often poor forecasters of which performances would be popular. They would choose new acts based on patterns of what the audiences liked last time — what we sometimes call “pattern recognition”. On the other hand, Creators were also poor forecasters of whether their own ideas would succeed: in general, humans have a cognitive bias that leads us to overestimate our own ideas and abilities relative to others’.
It turned out that the best evaluators of a Creator’s new idea were other Creators. With a finely tuned risk tolerance developed from years of their own failures, an understanding of how to make an impression on the audience, and a higher likelihood of respect for constructive, honest criticism from a fellow Creator, they were able to more effectively forecast the success of new performances.
Yet Creators are rarely in a position to decide which ideas get a shot.”
Village Capital applied this insight to its investments, and found the wisdom of entrepreneurs (Creators, in a sense) to be a much better predictor of good ideas than any outside expert analysis.
How can we apply this to the analysis of feedback? For small organizations with lots of feedback data to crunch through, one idea might be to ask feedback respondents themselves to help sort that data into highest-priority buckets. Applied correctly and fairly, this could offer an efficient, high-quality, scalable analysis tool for any nonprofit, and provide an opportunity to engage constituents in a more deeply co-creative process.
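To make that concrete, here is a minimal sketch, in Python, of what respondent-driven sorting could look like: each respondent rates a small random sample of their peers’ suggestions, and items are bucketed by average rating. Everything in it is hypothetical, the example items, the 1-5 scale, the simulated ratings, and the bucket thresholds; it illustrates the shape of the process, not a tool Feedback Labs or Village Capital uses.

```python
import random
from collections import defaultdict

# Hypothetical feedback items collected from constituents.
feedback_items = [
    "Extend program office hours into the evening",
    "Offer materials in Spanish as well as English",
    "Add a childcare option during workshops",
    "Shorten the intake survey",
]

def collect_peer_ratings(items, n_respondents, sample_size=2, seed=0):
    """Each respondent rates a small random sample of feedback items on a
    1-5 scale. Ratings are simulated here; in practice they would come from
    a short follow-up survey, and you would exclude each respondent's own
    submissions from their sample."""
    rng = random.Random(seed)
    ratings = defaultdict(list)
    for _ in range(n_respondents):
        for item in rng.sample(items, k=min(sample_size, len(items))):
            ratings[item].append(rng.randint(1, 5))  # stand-in for a real rating
    return ratings

def bucket_by_priority(ratings, high=4.0, medium=2.5):
    """Sort items into priority buckets by average peer rating. The
    thresholds are arbitrary and would need tuning against real data."""
    buckets = {"high": [], "medium": [], "low": []}
    for item, scores in ratings.items():
        avg = sum(scores) / len(scores)
        key = "high" if avg >= high else "medium" if avg >= medium else "low"
        buckets[key].append((item, avg))
    return buckets

buckets = bucket_by_priority(collect_peer_ratings(feedback_items, n_respondents=20))
for priority in ("high", "medium", "low"):
    for item, avg in sorted(buckets[priority], key=lambda pair: -pair[1]):
        print(f"{priority:>6}  {avg:.1f}  {item}")
```

The appeal of this shape is that each rater sees only a small sample, which keeps the burden per respondent low while the overlapping ratings still add up to a ranking of the whole pool.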
Have you seen peer review work in this context? Let us know! Reach us at [email protected].