The Spread of Best Practices: The Supply Side

The supply-siders still dominate the debate about quality and accountability in humanitarian aid, but those pushing for more focus on ‘voice’ now get a more attentive hearing. Let me elaborate. Recently I joined some of the most thoughtful people in the humanitarian community to talk standards and certification—and how these can better drive quality, effectiveness and accountability in humanitarian work.

[dropshadowbox align="left" effect="lifted-both" width="275px" height="" background_color="#ffffff" border_width="1" border_color="#dddddd" rounded_corners="false" ]

“Is the humanitarian aid system, which has grown into a sprawling $17 billion a year industry, any better at serving the needs of those it seeks to support than it was?”

[/dropshadowbox]

The forum on humanitarian standards, which was held in Geneva with support from the Swiss Agency for Development and Cooperation (SDC), is part of a continuum of efforts that goes back to the Rwanda genocide in the mid-1990s when the inadequacies of the international system in protecting civilians marked a tipping point in humanitarian operations. Things had to change and the humanitarian aid organizations embarked on a process of professionalization that, between 1997 and 2003, spawned a range of quality and accountability initiatives. These include the SPHERE project, the Humanitarian Accountability Partnership, the Active Learning Network for Accountability and Performance (ALNAP), and the Good Humanitarian Donorship initiative—among others.

Some 15 years down the road, the big question is whether these supply-side initiatives, by distilling best practices and determining whether aid organizations are capable of acting on them, have improved humanitarian outcomes. In other words, is the humanitarian aid system, which has grown into a sprawling $17 billion a year industry, any better at serving the needs of those it seeks to support than it was?

 

Measuring Improvement

The assumption is that the various quality and accountability initiatives (and the growth in monitoring and evaluation) have raised the game of the humanitarian community in dealing with what one participant at the Geneva meeting described as the ‘messiness’ of humanitarian action. But how do we determine the kind of difference these initiatives make? According to Jane Cocking, head of humanitarian practice at Oxfam GB, there are three things that count in humanitarian programs: speed, relevance, and accountability to affected people.

Ground Truth

Speed is hard to achieve, but easy to measure. Relevance and accountability to beneficiaries are more challenging—at least in the context of conventional ways of assessing performance. My main takeaway from the two days in Geneva is that the best way to gauge whether humanitarian programs are making a difference is to ask the intended beneficiaries. This is no surprise, perhaps, given that I lead Ground Truth, a program at Keystone Accountability that is testing a new cut-through approach to accountability and performance management based on the beneficiary perspective.

 

Hearing The Beneficiary Perspective: The Demand Side

Ground Truth’s starting point is to ask very few questions (never more than five) and to do so often. In Haiti last month, using focus groups and surveys, we heard directly from a subset of the 300,000 beneficiaries who remain in temporary camps. The main message is that they are pleased to be asked their opinion. In the three and a half years since the earthquake, no one has inquired whether the aid they receive is relevant to their needs, whether its quality is adequate, or whether they trust those in charge. And they have had no opportunity to express their own ideas about how to exit the camps and get back on their feet.

Working with our operational partners in Haiti, we are now bringing the beneficiary perspective to center stage as new programs focus on closing some of the remaining camps. For the aid agencies, it is a shift away from the anecdotal, narrative approach to listening. Our survey methodology is systematic, using identical survey techniques across representative samples of the beneficiary population. This provides an opportunity for continual improvement as agencies make course corrections based on frequent rounds of feedback. The average cycle for data collection in our pilot projects is three months, but this needs to shorten further. Participating agencies commit to sharing the feedback data received from Ground Truth promptly with beneficiaries and to explaining what they propose to do about it. When feedback leads to action, participation rates go up—as does the quality of what beneficiaries say.

It is not an either/or proposition. Knowing what to do in a humanitarian emergency and validating an agency’s competence to do it right are both important. But if beneficiaries are the unit of account in humanitarian programs, as no one disputes, we need to start engaging with them systematically in a continuous dialogue. Unless we do so, it is hard to verify whether supply-side initiatives have the desired impact. We also miss out on a perspective that aid agencies can act on, and that donors can factor into their funding decisions. Most importantly, such engagement gives people caught up in the messiness of disaster response the sense of dignity that is one of the guiding principles of humanitarian action.

by Nicholas van Praag, Ground Truth Director, Keystone Accountability

www.keystoneaccountability.org/services/groundtruth
