Published 2/13/15 by Caroline Fiennes of Giving Evidence
Charities produce masses of evidence about their effectiveness, but we suspect that much of that research is missing (unpublished), ropey (uses poor research methods), unclear (so you can’t tell whether it’s ropey or not) or unfindable (because it’s only published on the website of an organisation you’ve never heard of: there are virtually no central indexed repositories). It’s thought that fully 85% of all medical research is wasted in ways like this.
This damages beneficiaries in two ways. First, donors and other operational charities can’t reliably get feedback on whether a particular type of work is effective, so may avoidably implement or fund something suboptimal. And second, the research consumes resources which could perhaps be better spent delivering something which does work.
Hence Giving Evidence, a UK-based organisation aiming for charitable giving to be based on sound evidence, works on availability, quality, findability and clarity of charities’ research.
We know that much charity research is unpublished: when I was a charity CEO we researched our impact, and when the results were good we published them, and when they weren’t we didn’t. I’d never heard of publication bias but I had noticed that bad results make for bad meetings. In our defence, we were just responding rationally to badly-designed incentives.
We suspect four reasons that charities don’t publish their research.
- First, incentives, as outlined.
- Second, they may think that nobody’s interested. By analogy, a campaign in the UK to get grant-makers to publish details of all their grants (which few do) found that many foundations were open to doing this but simply hadn’t realised that anybody would want them.
- Third, it’s unclear where to publish even if you want to: there are few repositories or journals, and no standard ways of ‘tagging’ research online to make it findable.
- Fourth, commercial confidentiality: charities often compete for funding and government contracts.
Our first study examines the extent and causes of non-publication among UK charities supporting people with mental health issues. Subject to further funding, we’ll also look for publication bias: whether the chance of research being published depends on who does it, how positive the results are, and whether it involves feedback from ‘beneficiaries’.
On research by charities being hard to find and unclear, Giving Evidence is working with charities in criminal justice. We’re creating a standardised, structured abstract to sit atop any research report by charities, detailing, for example: what the intervention was; what kinds of people were served and where; what the research method was (sample size, how participants were selected); what outcomes were measured; the results; and the unit cost. It may also detail whether feedback was used in the intervention design and/or data collection. This abstract borrows heavily from the checklists for reporting medical research, which are thought to have markedly improved the usefulness and quality of medical research. We’re also looking at creating, not a central repository as such, but open meta-data to allow charities to tag their research online, plus a central search ‘bot’ (rather like this) through which donors, charities, practitioners and policy-makers can rapidly find it. This would also improve the ability of tax-payers, beneficiaries and anybody else to see research which affects them, and to see the gaps where more research is needed.
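As a purely illustrative sketch of how such a structured abstract could work in practice, here is one possible representation in code. The field names below are our own invention for illustration, not a published schema, and Giving Evidence’s actual template may differ:

```python
# Hypothetical structured abstract for a charity research report.
# All field names here are illustrative assumptions, not an agreed standard.

REQUIRED_FIELDS = {
    "intervention",       # what the charity actually did
    "population",         # what kinds of people were served, and where
    "study_design",       # e.g. RCT, pre/post comparison, qualitative
    "sample_size",
    "sampling_method",    # how participants were selected
    "outcomes_measured",
    "results",
    "unit_cost",
}

def missing_fields(abstract: dict) -> set:
    """Return which required fields a draft abstract still lacks."""
    return REQUIRED_FIELDS - abstract.keys()

# A made-up example report, complete against the checklist above.
example = {
    "intervention": "weekly peer-support groups",
    "population": "adults with depression, Manchester",
    "study_design": "pre/post survey",
    "sample_size": 120,
    "sampling_method": "all participants invited",
    "outcomes_measured": ["self-reported wellbeing score"],
    "results": "mean wellbeing score rose 4.2 points",
    "unit_cost": "£310 per participant",
}
```

The point of a fixed checklist like this is that a reader (or a search tool) can tell at a glance whether a report says enough to be judged, and which pieces are missing.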
And on charities’ research being ropey, we’re working with a foundation to assess the quality of research that their grantees produce. The quality of charities’ research has barely been assessed: we know of just one study, in which a UK foundation found that about 70% of the research it received from grantees was what it called ‘good’, and some appeared to be totally fabricated.
Medicine has made great strides by enabling front-line practitioners to make decisions based on sound evidence – since in their world, like ours, the best course of action isn’t always evident. Hence medicine devotes considerable resource to figuring out how much research is ropey, and why, and fixing it. There are whole teams devoted to improving research reporting, to make it clearer. Other teams look at ‘information infrastructure’ to ensure that evidence can be rapidly found; and many people study non-publication and selective publication of clinical research and work on rooting it out. This is very much supported by front-line ‘beneficiaries’: one of the most ardent advocates for getting all clinical trial data released is MumsNet, an online community whose members realise that their lives and their children’s are directly affected.
Finding the true effect of programmes is a type of science – fundamental to which is that results should be repeatable, and not flukes. Mainstream science relies on a growing movement to make data about experiments and results transparent and open, so that people can see whether the results can be repeated. The social science behind charitable programmes is precisely the same: the methods and results need to be clear and findable enough to see whether results can be repeated, and indeed whether the research is robust enough for the conclusions to be valid.
Thus meta-research – research about research – is essential to improving decisions. Far from being merely technical and dry, good meta-research can help improve and save real beneficiaries’ lives.
Giving Evidence’s meta-research and work on the information infrastructure are, we think, important steps. We’ll report later on what we find.
If you are interested in investigating these problems in sectors where you operate, or in getting involved in Giving Evidence’s work, do contact [email protected]
Caroline Fiennes is Director of Giving Evidence, and one of the few people whose work has featured in both The Lancet and OK! Magazine. She is author of the acclaimed book It Ain’t What You Give, It’s The Way That You Give It.
This talk (17 min) rattles through the issues of quality and incentives in charities’ research.