Fariha Raisa, Feedback Labs | July 8th, 2022
As the practice of feedback becomes more prevalent in the non-profit sector, organizations are recognizing the significance of incorporating community feedback into their programs. Collecting more feedback allows organizations to uplift communities by including their voices in the decision-making process. Someone new to the feedback space might then wonder how exactly to initiate this feedback process, and whether a replicable model exists for analyzing the collected data. To brainstorm ideas for developing such a model, the Inter-Agency Standing Committee (IASC) presented at a LabStorm on standardizing feedback practices across the humanitarian space.
The IASC is an inter-agency forum of UN and non-UN humanitarian partners. The workstream that presented is co-led by the International Federation of Red Cross and Red Crescent Societies (IFRC) and the World Food Programme (WFP). It aims to improve the ability of humanitarian organizations to collectively respond to the priorities, suggestions, and concerns of people affected by crises by addressing some of the biggest challenges around community feedback. These challenges include a lack of sharing and analysis across channels and agencies, and hence a lack of collective, systematic action to address feedback.
Currently, there is no systematic approach to how feedback data is managed, shared, and used to improve humanitarian programs. Unlike a traditional customer service mechanism that focuses on closing tickets and improving organizational performance, a community feedback mechanism (CFM) should enable diverse actors, including the community, to come together, look at trends and identify gaps and opportunities to improve humanitarian assistance overall. Hence, the discussions in the session revolved around actionable ideas for a more systematic and inclusive approach to feedback analysis.
- Categorizing feedback for a faster resolution process. Attendees emphasized the operational aspect of responding to feedback and suggested focusing more on operational thinking than on finding perfect categories. A starting point would be identifying the elements that make it possible, easy, and rewarding for staff to actually look at, consider, and respond to feedback. When seeking to speed up processes based on purely qualitative feedback, participants also cautioned against the tendency to respond rapidly without first formulating and testing hypotheses for different types of feedback.
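One way to make categories operational rather than merely descriptive, in the spirit of the attendees' suggestion, is to attach an accountable owner and a response target to each category. A minimal sketch in Python (the categories, team names, and timelines here are hypothetical illustrations, not ones discussed in the session):

```python
from dataclasses import dataclass

@dataclass
class Route:
    """Operational handling for one feedback category (hypothetical values)."""
    owner: str        # team responsible for responding
    target_days: int  # target number of days to first response

# Hypothetical routing table: the point is that each category maps to an
# accountable owner and a deadline, not that these labels are "correct".
ROUTES = {
    "safety_concern": Route(owner="protection_team", target_days=1),
    "aid_quality": Route(owner="program_team", target_days=7),
    "information_request": Route(owner="helpdesk", target_days=3),
}

def route_feedback(category: str) -> Route:
    """Return the handling route for a category, defaulting to the helpdesk."""
    return ROUTES.get(category, Route(owner="helpdesk", target_days=3))
```

The routing table makes the resolution process inspectable: staff can see at a glance who owns what and whether response targets are realistic, which supports the "operational thinking" attendees called for.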
- Coding qualitative community feedback. Attendees suggested using machine-learning algorithms to build a heatmap of frequently used words and generate a keyword map, so that emphasis falls on the issues at hand rather than on manually categorizing community feedback. They also cautioned that machine learning tends to fence respondents into “tracks” based on how the majority have historically responded, drowning out nuance. Customer satisfaction surveys were also deemed a powerful tool for encoding qualitative feedback. Lastly, because such a system is designed for maximum flexibility, attendees recommended holding weekly meetings to check that the technology is working as intended.
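As a rough illustration of the word-frequency idea behind the suggested heatmap, here is a minimal sketch in Python: plain keyword counting over a few invented comments, standing in for a fuller machine-learning pipeline (the comments and stop-word list are assumptions for the example):

```python
from collections import Counter
import re

# A handful of hypothetical community comments standing in for real feedback.
comments = [
    "The water point is too far from the camp",
    "Distribution of water was delayed again",
    "We need more information about food distribution",
]

# Tiny stop-word list for the sketch; a real pipeline would use a fuller list
# and handle the languages actually spoken by respondents.
STOP_WORDS = {"the", "is", "too", "from", "of", "was",
              "we", "need", "more", "about", "again"}

def keyword_counts(texts):
    """Count non-stop-word tokens across all comments."""
    words = []
    for text in texts:
        words.extend(w for w in re.findall(r"[a-z]+", text.lower())
                     if w not in STOP_WORDS)
    return Counter(words)

top_issues = keyword_counts(comments).most_common(3)
# "water" and "distribution" surface as the most frequent keywords,
# pointing analysts toward the issues rather than toward category labels.
```

Even this crude count surfaces recurring concerns across channels; the caution about "tracks" still applies, since whatever the majority mentions most will dominate the map.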
- Monitoring the impact of feedback mechanisms. With smaller-scale feedback efforts, it might be worthwhile to simply ask respondents to what extent the feedback process has improved their relationship with or experience of the program. The results from this question can facilitate a dialogue about the extent to which participants have seen course corrections related to their feedback. Attendees also noted that while monitoring impact is important, it is necessary to ensure that the other stages of the feedback loop are happening: staff need to respond to the feedback and make meaningful changes as well. In coming up with an indicative question to monitor impact, it is important to recognize the three facets of such a question: the question being asked, the messages that may be implicit in it, and the larger downstream purpose it serves.
With many actionable ideas on the table, IFRC recapped the session with their next step of strengthening discussions with the right stakeholders to respond to feedback. The objective would be to develop community feedback data standards to enable safe aggregation and analysis of feedback from different sources.
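A shared data standard of the kind IFRC describes would, at minimum, fix a common set of fields so feedback from different channels and agencies can be pooled without exposing identities. A minimal sketch in Python (the field names are illustrative assumptions, not the IASC's actual standard):

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class FeedbackRecord:
    """One anonymized feedback item in a hypothetical shared format."""
    record_id: str     # opaque ID; no personal identifiers
    collected_on: date
    channel: str       # e.g. "hotline", "help_desk", "community_meeting"
    category: str      # code from a shared taxonomy
    summary: str       # anonymized free-text summary
    agency: str        # collecting organization

record = FeedbackRecord(
    record_id="fbk-0001",
    collected_on=date(2022, 7, 1),
    channel="hotline",
    category="aid_quality",
    summary="Ration size reported as insufficient for household",
    agency="example_agency",
)

# asdict() yields a plain dict ready for pooled, cross-agency analysis.
row = asdict(record)
```

Because every agency emits the same fields, records from different sources can be aggregated and trends examined collectively, which is the gap the session set out to address.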
Learn More About LabStorms
LabStorms are collaborative problem-solving sessions designed to help organizations tackle feedback-related challenges or share what’s working well in their practice.
Presenters leave the experience with honest, actionable feedback and suggestions to improve their feedback processes and tools.
To learn more about participating in a virtual LabStorm, please visit feedbacklabs.org/labstorms.