This piece was originally posted with the Centre for Youth Impact.
The most useful reflection for me, following last year’s excellent Feedback Labs Summit in Washington DC, is that impact measurement and constituent feedback could usefully be coordinated in ways that bring us closer to achieving the human projects at the heart of well-managed social programmes. There is, arguably, a tension between these two progressive aspects of philanthropic and social endeavour that could be made more salient and tackled more directly.
Much social impact measurement tends towards the perfectionist. We see this tendency in ambitious projects such as the Social Genome Project in the US or the S/E ratio developed at the University of Northampton in the UK. Approaches of this kind aim to pin down a set of human universals, like butterflies in glass cabinets, and then measure social programmes according to a single standardised set of indicators that ‘match up’ to this particular narrow plan for human accomplishment. This has the merit of cutting through the complicated tangle of human activity, but runs the risk in so doing of throwing the baby so far out with the bathwater that she hits the stratosphere, condemned forever to mournfully rotate around our simplified world.
There are other ways to think of measurement.
We used to rely on our bodies to apprehend the world. Our measures were anthropomorphic. We took the measure of the world with our limbs: the foot, the arm, the palm of the hand; and our physical prowess: the throw of a stone, the carrying distance of a voice. Ours was an embodied taxonomy: a taxonomy that shaped our technology. The handloom could only weave a cloth the length of an armspan, so that the armspan became, and for a long time remained, the measure of cloth.
We lived according to our bodies. Our architecture was shaped by them. The best rooms were on the first floor, at the level of the eyes and the mind, while the spaces for cooking and cleaning, and the servants’ quarters, were, naturally, kept at the rear.
We measured and we designed as a means of achieving and accommodating our human projects. Medieval land measures were based on two things: the amount of ground that could be covered by a single man working with his hands, or that could be sown with a set volume of seed. Both measures allowed for useful differentiation between the type of land being worked. Important distinctions in soil quality and type were accounted for by them, and a meaningful equivalence was established between a smaller, richer stretch of land that could be densely sown for a fair harvest, and a larger, less fertile field that would be sown more sparsely for the same return.
These were not Platonic measures. They weren’t abstract, or idealistic. They were informative, practical and useful. They were close to our lived experience. They drew from that experience and shaped it. They were predictive, but in a pragmatic way that took into account the vagaries of the weather. They were easily understood, and best understood by those most closely concerned with them. An experienced labourer was better able to measure using a seed measure than a tax collector, and that labourer knew that his measure was predictive of a specific desirable result: enough food for his family, rent for his landlord, tithe for his church and feed for his animals. The labourer also knew how to game the system in his favour when he could, and that is not the least significant aspect of the medieval system.
Then we standardised. We abstracted. It was no longer my foot; it was the foot. We were attracted by perfection and drawn to systems that set absolutes and universals, and took us further from the projects ordinary people set out to achieve. We created a new set of professionals to administer those measures. Surveyors took over from labourers as the most able measurers of a hectare. The human project of farming for a fair harvest became, by that change, a little more symbolic, and a little less predictable for the farmer, as power and knowledge shifted away from him.
It looks increasingly as if we lost something important when we transitioned to systems that set standards far from our own daily experience. It would be unfortunate if our social impact measures learnt primarily from the history of metric standardisation and failed to see the potential of a more human-centred set of – by all means standard – measures. There are important things at stake in the development of the field of social impact measurement: power, agency, choice and freedom, and who gets access to these good things. We aren’t currently on track for an equal distribution.
Closing feedback loops could be the key to recalibrating our systems and our social programmes towards the human projects that people really want to achieve. Just as with measurement, feedback systems are not a neutral, objective means of gauging a fixed and certain set of answers. By asking questions, and by asking some questions rather than others, we encourage people to think of themselves in certain sorts of ways. The conference gave us some examples, such as the Indian women who were learning to read and write in order to give feedback, that touched on the ways in which feedback systems that act on constituent voice can shake up existing relations of power.
We need to give much more careful consideration to the ways in which measurement changes who we are and how we understand ourselves. This is not because some measures are in an absolute sense better than others (although they may be so contingently). It is because measuring human beings is an intensely and indivisibly creative process. It gives shape to our aesthetics and to our values. We become happy and mindful ‘change agents’ rather than productive and efficient workers in part because we choose to measure some things rather than others. The Ashoka presentation at the summit touched on this in passing. Measurement, whether of outcomes or of point of view, is never neutral. In considering the kinds of measures we use to account for social impact, we have much to learn from feedback, not least the core principle of putting the achievement of human projects front and centre. Those of us who measure social impact should be asking ourselves the same question as those implementing feedback systems: who sets the standard?
About the author:
Genevieve Maitland Hudson leads on research and evaluation at UK consultancy Osca. She works on projects in health, education and organisational change, often developing new research methods to suit different people and different styles of delivery. Recent projects have included evaluation of Year of Care implementation for the London Borough of Islington, developing new evaluation measures for a youth employability project, and rethinking measurement use in management training for Kent County Council.