As an international development community, we spend upwards of US$2.5 billion each year on program monitoring and evaluation (M&E).
In principle, this investment provides data and information necessary for organizational accountability and learning, programmatic decision-making, and improved development outcomes.
But in practice – after speaking with over 600 individuals across country governments and donor agencies – we’ve found that staff report little to no use of M&E data for decision-making. Despite such enormous investments, the international development community has not yet closed the loop between data collection, analysis, and use.
Most M&E systems are designed around technological considerations: what data must be captured, and how. Unfortunately, this approach fails to account for the expectations, incentives, and resources that impact data uptake. These factors, at both the organizational and individual level, make up “decision space”.
A person’s decision space determines how much information and control they have over human, financial, and technical resources, which in turn shapes their data use.
For example, those at more central levels – a minister’s office or headquarters – tend to have greater clarity, power, and flexibility over resources. On the other hand, those at more local service delivery levels – schools, health clinics, district offices – often have highly constrained budgets and pre-determined priorities.
What does this mean?
Operationally, this means that local staff are expected to collect and report M&E data “up the system”. In practice, local staff also exercise their decision space in terms of time: time spent delivering services, time spent meeting reporting obligations, and (extra) time spent analyzing and using information.
The competing workloads of service delivery and reporting can disincentivize data use. Limited resources to do something differently as a result of data can further disincentivize local data collection and use. And if poor-quality data are reported up, this can also disincentivize data use by central-level staff.
So how do we create a positive results data feedback loop?
We propose that achieving data use requires changing how M&E systems are implemented – which requires changing the traditional approach to designing M&E activities. Analyzing and navigating decision space provides a useful framework for developing more efficient and effective approaches to increase data use – and importantly, should involve intended data users themselves. By engaging with potential M&E data users from the outset, system-builders can more easily identify which data are relevant for the decisions of each individual; match reporting responsibilities and levels of disaggregation to needs and constraints; and ultimately create a system that facilitates data collection, sharing, and use.
To further explore decision space and investments needed to strengthen the use of evidence, join Development Gateway on Monday, December 10th for the event Strengthening Foreign Assistance through Results Data Use.
This blog is based upon the DG white paper Understanding Data Use: Building M&E Systems that Empower Users.