Author: Renee Ho
In this piece, we look at how acceptable forms of evidence—and who creates this evidence—are shifting in applied medical science. We ask: if information about what works in medicine is changing, then shouldn’t social policymakers also be considering new forms of evidence?
If a practice works for a particular institution, then that institution adopts it. It doesn’t sound revolutionary, but it is. “Best fit” (what works for a unique situation) over “best practice” (what works on average or according to an “expert”) has not always been the norm, but it is gaining ground.
For years, medicine has relied on clinical trials of medical interventions to determine what works. However, the reality is that “what works” can vary tremendously from place to place. Clinical trials are not designed to fully consider the complexity of real situations. They often fail to consider:
- The interactions between clinical practices, or
- The differences in intervention administration across institutions or contexts 1
Eppstein MJ, Horbar JD, Buzas JS, Kauffman SA (2012).
When measuring the impact of the same intervention, researchers find differences in mortality across participating institutions (Gray 1994; Horwitz et al., 1996), even when a randomized controlled trial (RCT) has found the intervention to be helpful on average.
Knowing the average effect isn’t good enough. As a patient, you want to know what works for you. Indeed, what constitutes “evidence” in medicine is changing: patients are in control.
A move is underway from medicine based strictly on RCT evidence to medicine increasingly focused on patient-centered care. While the two approaches are not mutually exclusive, patient-centered care provides
“care that is respectful of and responsive to individual patient preferences, needs, and values, and [ensures] that the patient’s values guide all clinical decisions.”2
Patient engagement occurs at every stage of care design and implementation. The result is positive: patient experience and health outcomes are often correlated (Manary et al, 2013).
Not surprisingly, a good way to measure patient experience is by asking the patient. In one study, “The Impact of Patient-Centered Care on Outcomes,” the authors find that patient-centered practice is associated with improvements in patients’ health status. However, only one of the study’s two measures of patient-centered practice showed this result: patients’ perceptions of the patient-centeredness of the visit. The other measure, based on ratings of audiotaped physician-patient interactions, was not directly related to health status (Stewart et al 2000).
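To make that idea concrete, here is a minimal sketch of the kind of analysis such studies rest on: take patients’ own ratings of how patient-centered a visit felt, take a later health-status measure, and check whether the two move together. The numbers and variable names below are invented for illustration; they are not data from Stewart et al.

```python
# Hypothetical illustration: relating patient-reported perceptions of
# patient-centeredness to a later health-status score.
# All numbers below are invented for the sake of the example.
from scipy.stats import pearsonr

# Patient-reported perception of how patient-centered the visit was (0-10).
perception_scores = [8, 6, 9, 4, 7, 5, 9, 3, 6, 8]

# Self-reported health status at follow-up (0-100, higher is better).
health_status = [72, 60, 80, 45, 66, 55, 78, 40, 58, 75]

r, p_value = pearsonr(perception_scores, health_status)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```

In the made-up data above, higher perception scores track higher health status, which is the pattern the Stewart study reports for its patient-perception measure.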
Isn’t it about time aid and philanthropy did the same?
How do we take advantage of what the patient knows and how the patient feels? How could these methods also be used with beneficiaries of aid and philanthropy projects?
“[narrative methods] facilitate processing thoughts and emotions related to healthcare experiences, identifying patterns of health or healthcare choices, or reflecting on clinical interactions (and course-correcting when necessary).”
The Healthcare Narrative Playbook, created with the Robert Wood Johnson Foundation, helps patients, providers, and caregivers communicate better in terms that make sense to patients.
Storytelling inverts the power dynamic. It acknowledges that doctors are not always the experts. Rita Charon, founder of the Program in Narrative Medicine at Columbia University writes that
“narrative knowledge leads to local and particular understandings…” 3
and explains that a close reading of narrative allows clinicians to become
“better perceivers of multivalent scenarios.”4
Not all doctors are comfortable admitting that they aren’t the experts, but to stay in business, they’ll have to respond to new patient demands. Patients are demanding more information from their physicians than they did in the past (Mechanic et al 2006). Regular people are gravitating to popular books like Leana Wen and Joshua Kosowsky’s When Doctors Don’t Listen: How to Avoid Misdiagnoses and Unnecessary Tests and Heidi Julavits’ article, “Diagnose This! How to Be Your Own Best Doctor.” This literature isn’t saying: don’t go to the doctor. It is saying, however, that you should be more vocal and engaged when you do.
The exact causal mechanism for this is unclear, but the authors suggest that during this time period, individuals’ exposure to health information grew. New information sources and a general increase in health information-seeking behavior have resulted in patients accessing more information, not only from medical professionals but also from others. The authors find that most individuals do not rely on a single source of information.
Information in the hands of subjects (patients, citizens, beneficiaries, etc.) can be powerful. Online patient forums can function like a patient’s own “QIC” (quality improvement collaborative). She can find out what other people with similar symptoms have done and try the same methods herself. People frequently find these forums more useful than the algorithmic symptom checkers that functionally mimic what the “expert” doctors do.
Call it crowd sourcing, or maybe “citizen science.”
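A toy sketch of what that crowd-sourced matching might look like, assuming nothing more than a pile of forum posts tagged with symptoms and what each poster tried: rank posts by symptom overlap and surface what the closest matches did. The posts, symptoms, and scoring rule are all invented for illustration.

```python
# Toy illustration of crowd-sourced matching: find forum posts whose
# reported symptoms overlap most with the patient's, and surface what
# those posters say they tried. Data and scoring are invented.
posts = [
    {"symptoms": {"fatigue", "joint pain", "rash"}, "tried": "asked for an ANA test"},
    {"symptoms": {"fatigue", "headache"}, "tried": "tracked sleep and caffeine"},
    {"symptoms": {"joint pain", "swelling"}, "tried": "saw a rheumatologist"},
]

def similar_posts(my_symptoms, posts, top_n=2):
    # Rank posts by the number of shared symptoms (a crude similarity score).
    ranked = sorted(posts, key=lambda p: len(my_symptoms & p["symptoms"]), reverse=True)
    return ranked[:top_n]

for post in similar_posts({"fatigue", "joint pain"}, posts):
    print(post["tried"])
```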
“…[clinical trial] results are often inconclusive or may not be generally applicable due to differences in the context within which care is provided…. Health care systems [are] complex adaptive systems of interacting components and processes… learning by doing in small, local tests may be more effective than large-scale randomized clinical trials in achieving health care improvements” (Eppstein et al, 2012).5
Medicine, it turns out, is very much a social and environmental science. Patients exist beyond vacuum-sealed physiological and biochemical levels and at social and cultural levels too. They are treated in institutions that vary. If you talk to physicians, they describe their medical “practice”. A “practice” suggests that their work—improving the quality of healthcare for their patients—is one of constant experimentation and learning through doing. In fact, there is a whole “science of improvement” in healthcare that emphasizes rapid-cycle testing in the field to generate learning about which changes, in which contexts, produce improvements.
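A deliberately simplified sketch of that rapid-cycle logic, in the spirit of a Plan-Do-Study-Act loop: introduce one small change at a time, measure locally, keep what helps, and revert what doesn’t. The candidate changes and the measure() function below are placeholders, not a real improvement protocol.

```python
# Simplified sketch of a rapid-cycle improvement loop (PDSA-style).
# measure() and the candidate changes are placeholders, not a real protocol.
import random

def measure(practice):
    # Stand-in for observing an outcome under the current local practice;
    # in reality this would be field data, not a random draw.
    return sum(practice.values()) + random.gauss(0, 0.5)

practice = {"checklist": 0, "early_mobilization": 0, "family_briefing": 0}
baseline = measure(practice)

for change in list(practice):
    practice[change] = 1          # Plan / Do: introduce one small change
    result = measure(practice)    # Study: measure in the local context
    if result > baseline:         # Act: keep what helped here...
        baseline = result
    else:
        practice[change] = 0      # ...and revert what did not
print(practice)
```

The point of the sketch is the loop, not the arithmetic: what survives is whatever helped in this particular context, which is exactly the “best fit” logic described above.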
Sadly, we don’t know for certain what we know. In a review of highly cited clinical research studies, John Ioannidis finds that of the 45 studies claiming an intervention was effective, 32% were either contradicted by subsequent studies or found to have effects stronger than those of subsequent studies.
In international development, the authors of Problem-Driven Iterative Adaptation (PDIA) make a parallel argument, urging practitioners to

“…search across alternative project designs using the monitoring data that provides real time performance information with direct feedback into the decision loops of project design and implementation.”8
Andrews M, Pritchett L, Woolcock M (2012). “Escaping Capability Traps through Problem-Driven Iterative Adaptation (PDIA).” Center for International Development, Harvard University. Working Paper No. 240.
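One hedged way to picture that kind of search computationally is an epsilon-greedy loop: allocate most new rounds to the project design variant with the best observed performance so far, while occasionally trying the others. This is an illustration of the general idea rather than the PDIA authors’ method; the design variants and the monitoring signal are simulated.

```python
# Illustrative epsilon-greedy search over project design variants,
# driven by simulated monitoring feedback. Not the PDIA authors' method.
import random

variants = ["design_A", "design_B", "design_C"]
true_effects = {"design_A": 0.3, "design_B": 0.6, "design_C": 0.4}  # unknown in practice
totals = {v: 0.0 for v in variants}
counts = {v: 0 for v in variants}

def monitor(variant):
    # Placeholder for real-time performance data coming back from the field.
    return true_effects[variant] + random.gauss(0, 0.1)

for round_ in range(100):
    if round_ < len(variants):
        choice = variants[round_]                 # try each variant once
    elif random.random() < 0.1:
        choice = random.choice(variants)          # occasionally explore
    else:
        choice = max(variants, key=lambda v: totals[v] / counts[v])  # exploit
    totals[choice] += monitor(choice)
    counts[choice] += 1

print({v: round(totals[v] / counts[v], 2) for v in variants})
```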
As in medicine, storytelling or narrative methods could be used to better understand how project beneficiaries view the world. This would potentially allow policymakers to better perceive and address multivalent scenarios. In other words, it would allow them to finally do something about the “context matters” challenge to scaling a program across different populations.
If we agree that people often know best about their own condition, why don’t we just ask them about it? Perhaps, as in medicine, this feedback could be predictive of longer-term outcomes. Perhaps beneficiary feedback could be correlated to impact.
Storytelling and the narrative method have long existed within the “toolbox” of project designers and implementers under various guises: open-ended surveys, free-ranging key informant interviews, and focus group discussions. These storytelling tools can reveal the contextual variables or design details that may be key to the success of an intervention. A close reading of a case study might tell you more about causal mechanisms than simply the average effects of an RCT.
Recently, GlobalGiving launched a storytelling project in which over 60,000 stories were collected from project beneficiaries. The stories were prompted but otherwise open-ended, allowing the implementing organizations to better listen to beneficiary needs and desires that are not always captured in a prescribed survey, designed by “experts.” This began a conversation between project beneficiaries and leaders, but the question remains: how do we make this a regular practice that helps close the loop and improve services and products based on beneficiary feedback?
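One possible, purely illustrative way to turn tens of thousands of open-ended stories into something staff can act on regularly is to cluster them into recurring themes and read clusters rather than every story. The sketch below uses TF-IDF features and k-means; the story texts and the number of clusters are placeholders, not GlobalGiving’s actual pipeline.

```python
# Illustrative sketch: group open-ended beneficiary stories into themes
# so staff can read clusters rather than every story. Texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

stories = [
    "The water point is closer now but the queue is still long",
    "My daughter returned to school after the fees were covered",
    "The clinic staff explained the treatment in our own language",
    "We still walk far for water during the dry season",
]

features = TfidfVectorizer(stop_words="english").fit_transform(stories)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for label, story in zip(labels, stories):
    print(label, story)
```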
Human Centered Design (HCD) thinking uses storytelling in another way. A storyboard is a quick, low-resolution prototype of a product or service that can be presented to people to get their feedback. The tool presents a sequence of images that chronologically show what happens during provision of the service, much like a comic strip. With this tool, stories are presented to people to help provoke a reaction and get them to respond with their own opinions and stories.
The Consultative Group to Assist the Poor worked with the design firm IDEO to help a Brazilian bank develop payment products for lower-income individuals. They began by developing multiple storyboards. Customers empathized with the protagonist in each story and imagined themselves as the theoretical users of these products. They told stories—in their own words—about how they would use the products (or not), and most importantly, how and why. This allowed the bank to design products that met customer wants and needs, some of which they would not have foreseen without having this kind of conversation.
Alternative tools—like having open feedback from project beneficiaries—can provide an opportunity to learn more about how and why certain programs work in different contexts. With these tools we can better develop meaningful theory to ground our experimentation as we search for the “best fit” solution.
An imperfect analogy is “Whack-a-Mole”: without theory we will perpetually be striking with the same blunt interventions. Often, we will miss without learning anything. Sometimes, we’ll successfully strike but even then, we will not have learned much.
Let’s not discard one tool in favor of another. Let’s think, instead, of having more tools at our disposal, and use those that allow us to learn and ultimately succeed.