How can the feedback field recognize, celebrate, and support nonprofits that are improving how they listen to the people at the heart of their work?
The Irritants for Change (the Irritants, for short) are a network of feedback specialists and nonprofit rating systems committed to developing shared incentives for nonprofits and philanthropy to listen and act on feedback. They ‘irritate’ the nonprofit sector to encourage more and better listening.
The Irritants collaborate to develop tools to encourage nonprofits to reflect on and improve how they listen. For example, in 2018, they developed the Core Principles of Constituent Feedback, and in 2019 they launched How We Listen, a feedback self-reflection that US-based nonprofits can fill out on GuideStar by Candid. How We Listen starts with a simple question: Does your organization gather feedback (perceptions, opinions, ideas, or concerns) from the people you serve? It then asks nonprofits to reflect on how they collect and use feedback and the challenges they encounter. Nonprofits that use How We Listen to reflect on their listening practice earn higher ratings in the Culture & Community beacon of Charity Navigator’s Encompass Rating System. In the first two years it was live, almost 17,000 nonprofits filled out How We Listen!
Now, the Irritants are hoping to build on the success of How We Listen and find ways for the feedback field to discern the quality of nonprofit listening without relying solely on self-reporting. In previous brainstorming sessions led by Candid and Charity Navigator, nonprofits expressed interest in peer-based mechanisms that could help them assess their current listening practices, identify steps to take and resources to use to improve how they listen, and gauge how effective those practices are. At the LabStorm, Berilee Moussata and Alexis Banks presented two paper prototypes for such a process, dubbed “Peer Success Reviews.”
- Model #1 is based on the Philippine Council for NGO Certification’s volunteer peer evaluator model. Peer evaluators review documentation of a nonprofit’s organizational listening practice, conduct interviews with the nonprofit’s employees, and produce an assessment of its listening practice and how it could improve.
- Model #2 centers on regular calls with a cohort of peers. Nonprofit staff meet quarterly with peers from about 10 other nonprofits to report on how they’ve improved their listening practice over the past quarter. During the meeting, peers share advice and give a thumbs-up rating if they feel the nonprofit’s listening has improved.
It is critical that both models meaningfully reflect nonprofit listening practice, scale to hundreds of thousands of nonprofits per year, and support and encourage nonprofits on their listening journey rather than feel punitive. So we asked LabStorm attendees which model they thought was more aligned with those goals, how we could improve on these initial ideas, and which prototype would be most effective at motivating and supporting nonprofits to improve how they listen while drawing hundreds of thousands of them to participate. Here’s what attendees had to say.
A relationship-centric model is preferable. LabStorm attendees liked the second model better because it focuses more on peer relationships than on document evaluation. Attendees emphasized that larger nonprofits with established structures and more resources for listening may have an advantage under Model #1, whereas smaller nonprofits may be excluded. Furthermore, LabStorm participants highlighted that providing documentation and engaging in interviews would require significant staff time. Overall, attendees appreciated the opportunity for iterative learning and exchange that Model #2 offered. Cohorts would get to know one another, and attendees affirmed that this would be a more appealing option, especially for nonprofits whose listening practices are still developing and not yet formalized or institutionalized.
Participants would like an assessment rubric. LabStorm attendees thought we should replace the thumbs-up system in Model #2 with an assessment rubric to establish a standard rating scale and strengthen the feedback participating nonprofits give one another. They also proposed letting cohort members vote on the details of the rubric, since success criteria could differ depending on whether peers are grouped by topic area or intentionally mixed across sectors. Attendees asserted that this element would allow the peer assessment model to go beyond peer learning and more closely resemble a community of practice, complete with synchronous learning opportunities, resource sharing, and even co-development of open-source strategies.
The incentive to participate should be clear. Both models require nonprofits to devote time to the process. LabStorm attendees would want to be in a cohort where everyone was dedicated to attending and providing honest feedback. Peer success reviews should be a safe forum for sharing, and participants should make noticeable progress between sessions (rather than completing all of the work within the session), with built-in accountability. If a nonprofit shows progress on improving how it listens in the peer forum, it could earn higher ratings on platforms like Charity Navigator. LabStorm attendees thought we could consider offering additional incentives, like being profiled to funders or financial compensation to cover the staff time required to participate. Attendees pointed out that since many project-based nonprofits use billable-hour models, it would be difficult for those nonprofits to commit to the whole process otherwise.
It was great to hear all the ideas and advice on how we can draw on the power of peers to motivate and support hundreds of thousands of nonprofits to improve how they listen. In 2023, the Irritants will continue workshopping these concepts and finding ways to pilot them. If you’re interested in being involved, contact Megan via email at [email protected]!
Berilee Moussata is the Fall 2022 Strategy Intern at Feedback Labs. She helps Feedback Labs collaborators encourage organizations to do a better job of listening and responding to feedback, and she will collaborate with leaders in the field to scale rewards for organizations that have feedback practices. Berilee studied Public Health at the University of Colorado and is currently completing a postgraduate degree in Global Health.
She is passionate about global health policies that ensure underserved populations have access to adequate healthcare services to promote healthy living. She regularly participates in community engagement projects to build trust, connections, and collaborations between communities and research institutions. Berilee has contributed to numerous projects on disease control health programs throughout Latin America, the Caribbean, and Africa.
Previously, she served as a disease intervention specialist for a local health department in Colorado, where she performed routine case investigations and fostered a data-driven culture by working with team members to deliver accessible, strategic, and actionable data that informed organizational decision-making and performance.