Most organizations are out to change the world. This journey puts them on the chase for two things: that compelling story and that hard evidence that their intervention was the most effective thing ever tried. A few organizations end up following these practices because the experts advocate for them; they work in controlled research labs, after all:
- Baseline data at project onset
- Final data at project completion
- Regular intermediate measurements along the way
- Standardized, vetted outcome indicators, based on these measurements
- Narratives and perspectives from beneficiaries and stakeholders to complement the measurements
All of these are necessary, but having them may still not yield that hard evidence. They won’t prove a causal relationship between the thing you did and what happened, and they won’t yield reliable predictions about which investments or project inputs produce impact. For that, you need to consider three more things:
- If you’re a funder, were you the major funding source for the people or organizations that carried out the work? Was this true for every year of the work? If you’re an implementing organization, was your support the most important kind of support these people received? If you’re not sure, ask them! This must be true if an organization wants to claim attribution – i.e., that what happened, happened because of them.
- Do you capture the local context? What were the other important things in the lives of the people affected? If you’re not sure, ask them! People are surprisingly knowledgeable about the things in their world that matter to them. Proving causality – i.e., this happened because that happened – depends on knowing that other factors are not important. And outside of labs, the only way we’ll know which other important confounding variables affected our interventions is to develop a relationship with the people.
- Does a significant portion of your data describe failures as well as successes? Because we are all determined to be successful, this might seem counterintuitive. The renowned physicist Richard Feynman once said, “The first principle is that you must not fool yourself — and you are the easiest person to fool.” Every example of success is built on layers and layers of repeated failure. About 80 percent of small businesses and non-profits fail within the first 7 years, and yet we act as if failure is not an option. Accurately capturing failures in the data is just as important as capturing successes, because predictive models can only divine patterns if the data is honest about which is which (the short sketch after this list shows why).
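A minimal, hypothetical sketch of that last point, with invented grant records and funding levels rather than anything from a real portfolio: when every outcome is reported as a success, even the simplest model of which inputs produce impact has nothing left to learn.

```python
from collections import defaultdict

# Hypothetical, made-up grant records: (funding level, outcome label).
honest_data = [
    ("low", "failure"), ("low", "failure"), ("low", "success"),
    ("high", "success"), ("high", "failure"), ("high", "success"),
]

# The same portfolio with every outcome reported as a success.
all_success_data = [(funding, "success") for funding, _ in honest_data]

def success_rate_by_input(records):
    """Estimate the success rate for each funding level from labeled records."""
    tally = defaultdict(lambda: [0, 0])  # funding level -> [successes, total]
    for funding, outcome in records:
        tally[funding][0] += outcome == "success"
        tally[funding][1] += 1
    return {level: successes / total for level, (successes, total) in tally.items()}

print(success_rate_by_input(honest_data))       # the inputs differ: something to learn
print(success_rate_by_input(all_success_data))  # every input looks perfect: nothing to learn
```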
My colleague Caroline Fiennes at Giving Evidence proposes a useful thought experiment for telling whether your data is truly foolproof (in the Feynman sense): Take last year’s final reports from all the grantees, pass them to a gaggle of naive people, and ask them to read each narrative. Then have them place the reports into three piles: success stories, failure stories, and mixed results. If the readers put the same reports into different piles, you’ve got a serious problem: you’ve been lying to yourself and probably using tortured language to avoid describing failure. If every report ends up in the success pile, you’ve got a different kind of problem: either the reports are blatant lies or your foundation has a complete aversion to risk-taking.
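To make the exercise concrete, here is a minimal sketch of how the pile-sorting could be scored, assuming you record each reader’s verdict on each report; the ratings and warning thresholds below are illustrative assumptions, not anything Giving Evidence prescribes.

```python
from collections import Counter
from itertools import combinations

# Hypothetical ratings: each naive reader sorts every report into
# "success", "failure", or "mixed". One row per report, one column per reader.
ratings = [
    ["success", "success", "mixed"],
    ["success", "failure", "mixed"],
    ["success", "success", "success"],
    ["mixed",   "failure", "mixed"],
]

def pairwise_agreement(report_ratings):
    """Share of reader pairs that put a given report in the same pile."""
    pairs = list(combinations(report_ratings, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

average_agreement = sum(pairwise_agreement(r) for r in ratings) / len(ratings)
pile_counts = Counter(label for report in ratings for label in report)

print(f"Average pairwise agreement: {average_agreement:.0%}")
print(f"Pile counts: {dict(pile_counts)}")

# The two warning signs from the thought experiment (thresholds are arbitrary):
if average_agreement < 0.5:
    print("Readers can't agree -> tortured language is probably hiding failure.")
if pile_counts["success"] == sum(pile_counts.values()):
    print("Everything landed in the success pile -> blatant lies or total risk aversion.")
```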
What is success, really?
A scientist friend of mine once asked at an NIH grants review panel meeting, “What’s the opposite of incremental change? Excremental change!” And that’s what you’ve got if all the reports are in the success pile and none are in the other piles. It’s better to accept a healthy dose of failure alongside a few successes, because that is what incremental improvement looks like. Gene Kranz, the author of Failure Is Not an Option (about NASA), oversaw a roughly 50% failure rate (rockets exploding!) in the first three years of the space program. Even worse, 40% of all space shuttles were lost. But after a half century of incremental change, NASA has a 94% success rate overall.
Now consider: Almost all philanthropic interventions are funded and tested for at most three years. Think about it.
So these are the three things we will need if we ever want to rise to the higher echelons of attribution, causality, and predictive modeling.
Marc Maxmeister is the chief innovator at Keystone Accountability. As a PhD neuroscientist, he leads our efforts to develop new and better solutions. He has taught graduate-level neuroscience in Kenya and Python to middle school students in London, UK, and he mentors young professionals in our Data Scientist in Training program. He blogs at chewychunks.wordpress.com and is the author of several books, including Ebola: Local voices, hard facts (2014) and Trello for Project Management (2015).