Development Gateway brought the latest phase of their work with the Results Data Initiative to Feedback Labs’ collaborative brainstorming sessions, called LabStorms. Here’s a recap of what happened.
The international development community spends a great deal of time, effort, and money gathering results data. Ideally, these data comprise outputs (school enrollment, immunization figures), program outcomes (educational attainment, disease prevalence), and, in some cases, impacts (changes in key outcomes over time). But do they actually contribute to sustained impact?
Aid initiatives rely on results data to determine which programs and partners allocate their resources most effectively. Foundations invest millions of dollars into collecting this data. Governments make decisions based on representative results data. That’s enough to determine who has the greatest potential to achieve long-lasting success, right?
Maybe. But serious, consequential improvements need to be made to the international system of results data collection before the full potential of results data can be unleashed. Is the collected data reliably being used to make smart decisions that increase impact? What about local actors who collect data for these large-scale initiatives and never hear how the data was used in budgeting and program decisions? Is the data useful at the local level for improving management and impact?
The international development community requires a fresh approach to results data to genuinely increase development impact – starting with local decision-makers – in priority areas.
Development Gateway (DG), with the support of the Bill & Melinda Gates Foundation, “diagnosed” the results data ecosystem in three countries – Ghana, Tanzania, and Sri Lanka – to explore these questions. Focusing on the health and agriculture sectors, the Results Data Initiative identified ways to ensure the right users are able to collect, share, and use information to maximum effect.
Through hundreds of conversations with users in these data ecosystems, it became clear that one of the biggest barriers to use of the data (at the national level, but especially the local level) was the usefulness of the data itself. Data points that funders and donors thought would be useful in program management often do not end up fulfilling the data needs of program implementers on the ground. Additionally, when foundations invest in outcome data at the national level, impact and lessons learned are rarely disaggregated to the local level, making it difficult to see the sustained impact that many believe results data could bring.
Development Gateway finds that it is crucial to identify and fix the broken link between collecting results data and using it. Once evidence is collected, how can governments and development organizations ensure the data gets used – and effectively – by the right people?
1. Create Incentives to Use the Data
Using results data is not incentivized and therefore rarely happens. Local actors collecting the data are not explicitly recognized or rewarded for their work, nor are they required (or even encouraged) to analyze it. It doesn’t factor into job performance reviews, and there is no process for utilizing the data. This separation between action and outcome all but disincentivizes any investment in the process.
As explained by a Regional Medical Officer in Sri Lanka, “There is not much analysis done by us. Usually, collating is done to get the totals for the district level data…no specific software is available and also such analysis is not expected from us, as a duty.”
If the use of data is not required, or if next steps are unclear, why would an implementer do anything with it? Local governments, NGOs, and funders working with local initiatives would benefit from being more cognizant of and proactive about the incentives in place for implementers to analyze and use results data (or not).
Feedback Labs’ member organization Accountability Labs does just this by hosting “Integrity Idol,” where individual government officials are celebrated for their successes and provided the tools to enhance their measurement systems. Can a shift from simply demanding results data to celebrating its use make the data more relevant, and hopefully increasingly useful, to local actors?
2. Build Local-Level Tools and Skills
Through their research, Development Gateway found time and again that even when local actors had the desire and bandwidth to use the collected results data for program management, few had the right tools and skills to do so. A lack of data analysis training, for example, meant that many organizations were simply passing raw results data points up the reporting chain, missing the opportunity to determine for themselves how that data might be useful in improving their impact. A lack of proper data collection tools and training also prevented many from capturing the nuances behind the blunt results data they were required to collect.
Many LabStorm collaborators shared similar stories. For example, the director of a well-building project in Malawi could only collect data on the number of wells built. He didn’t have the tools needed to gather impact data on what he actually wanted to know: did the new system of wells enable a family to eat more than one meal a day? Were more children able to go to school? These are the kinds of data points that would enable him to do his job more effectively, but he often didn’t have the time, tools, resources, or incentive to collect them.
He also, like many other people delivering services, relied on paper forms to collect data. This sometimes led to a degree of unreliability that reduced the data’s usefulness for program management. Could simple injections of resources to improve data collection and analysis tools boost the results data’s reliability and usefulness to local actors, and create a resulting domino effect of reliability and usefulness up the reporting chain?
3. Prioritize Indicators
Instances in which local actors were able to act on the results data they were required to collect were few and far between. However, this doesn’t mean local decision makers aren’t using data. Many are using data to great effect, and Development Gateway found that learning from these successful organizations about the types of data that are actionable for them could go a long way toward helping funders and donors shift the content of required reporting.
One such successful example can be found at the best hospital in Ampara, Sri Lanka. It has short lines, a ticketed first-come-first-served patient intake system, and regularly administers surveys on the quality of its services. Hospital leadership has established a culture of excellence that affects the quality of work and services across the board. As a result, there has not been a single maternal death in the past three years, and everyone knows it. But this data, and the practices the data supports, are not required by the district, nor do they show up in reports. Detailed information about how the steps they have taken affect health outcomes is crucial to their day-to-day work, but rarely finds its way up the reporting chain.
A District Medical Officer states, “there should be new indicators developed for outcomes, as most of the data we collect are for activities.” Too much of what they are required to report doesn’t actually shed any light on how these activities are affecting outcomes, which makes it nearly impossible to adaptively manage their programs. Will shifting the types of indicators required – from activities performed to outcomes achieved – increase local actors’ knowledge base and result in better outcomes at the national level?
Near the conclusion of the LabStorm, participants turned their attention to what role feedback could play in all of this. Governments, NGOs, and local actors are often sitting on mounds of results data. The sheer volume of required results data compounds the issues around immediate implementation discussed above. LabStorm participants suggested that feedback can, and must, have a place in determining when and how results data is used and translated into local outcomes.
Feedback provides a different kind of analysis, and perhaps a different kind of filter when sifting through reams of results data. Can citizen voice help us determine which key results should be analyzed? If local decision makers had the tools, skills, and budget flexibility to incorporate results data into program management, feedback could provide a straightforward means of deciding how to prioritize next steps.
Feedback could also help determine what kinds of results data are actually most useful to local decision makers. This could include improved communication between funders/donors and local actors, in addition to increased flexibility for local actors to collect the kind of results data that they find directly actionable.
Over the next few months Feedback Labs looks forward to supporting Development Gateway as they continue investigating these questions, and we welcome your feedback in the process. What role does constituent voice have in results data? Contribute your thoughts below and check back here, on the Feedback Labs Blog, for updates.
Development Gateway’s Results Data Initiative analyzed the collection, quality, sharing, and use of data to determine appropriate planning and resource allocation. Development Gateway aims to influence the ways that development actors – both internationally and locally – approach the results data that they collect, share, and use. The goal is for these findings to prompt agencies and organizations to rethink the data they gather, and to make sure that the right people can use this data for the right decisions. Read more about RDI’s plans to make data matter here, and continue to follow the visualization of their initiative here.
LabStorms are collaborative brainstorm sessions designed to help an organization wrestle with a challenge related to feedback loops, with the goal of providing actionable suggestions. LabStorms are facilitated by FBL members and friends who have a prototype, project idea, or ongoing experiment on which they would like feedback. Here, we provide report-outs from LabStorms. If you would like to participate in an upcoming LabStorm (either in person or by videoconference), please drop Sarah a note at [email protected]