Facilitator: Marc Maxmeister
August 25, 2017

We started DataStorms because we realized that the barriers to collecting, analyzing, storing and sharing data can feel insurmountable to some organizations, especially ones with limited personnel, capacity and time. At our last DataStorm, Marc Maxmeister of Keystone Accountability and Nick Hamlin of GlobalGiving proposed ways in which the kinds of data we collect, and how we store and organize them, can facilitate sharing information across teams, across organizations, perhaps even across sectors. While not every organization can reach the pinnacle of data sharing (APIs and JSON files), there are still steps they can take to achieve data interoperability. In changing the way we share information, we can collaborate to solve the world’s biggest problems.

With the author’s permission, we have reproduced Marc’s reflections (originally featured on his blog) on how, if collaborations are going to be efficient, our work needs to pull from each other’s data more easily.

This is interoperability.

When you collaborate on writing a document or on filling in a spreadsheet on google docs, you are doing it. When you get together in rooms to talk strategy, you are also doing a version of it. But in the first case you’re co-creating data and in the second you are trusting the other people in the room to interpret their data correctly.

We are asking: What would it take to drive down the cost of team collaboration on a data level, instead of just on a strategy level (where we trust the other’s interpretation of data)?

Lack of trust is a big deal. Here are just a few of the perspectives in any room of collaborators:

  • Data Scientist: “About 90 percent of the work is cleaning data and preparing them for computers to analyze.”
  • Aid Actor: “About 900 percent of the work is getting management to understand which numbers affect reality.”
  • Local Change Agent: “Leaders live in their own world and their numbers don’t reflect our reality.”

In today’s international development world, the local change agents are usually excluded from the room.

The data people are usually called just before the meeting and again long afterwards, to prove an idea worked. Aid actors swallow up the seats and control how and when others participate. This guarantees failure.

The solution is to change the way teams work together, so that everyone contributes in a meaningful way to what is created and how we define “success.” Recent efforts from USAID (Localworks) and DFID (UK Aid Connect) are heavily focused on this approach.

The solution:

  1. Getting teams to work better requires us to change the way we share information. The aid world is stuck relying on broken data exchange methods, compared to the tech world and the World Wide Web it created:


    Of these, people often think Excel documents are the main data, but it is mostly things “overheard” on village ridealongs, at conferences, in meeting rooms, and in passing that drive decision making. Even Bill and Melinda Gates – with their vast access to computing power – most often cite personal exchanges (things overheard) when justifying a direction. We shouldn’t underestimate the power of hearsay. And it is essentially missing from the data side. In fact, most people’s lived experiences in the “Global South” are not tracked as data in 2017.

  2. We need to build trust by contributing our perspectives into building shared definitions of success. This is where the future-tech behind FeedbackCommons.org – a website I’ve been helping build for the last 3 years – contributes. Organizations typically define success by imagining what it would take to change the lives of people in a disadvantaged community, and then working backwards from that to define measurable progress towards that outcome. The problem is that these outcomes are unattainable, and daily work adds minuscule amounts to a person’s quality of life.

    There’s a different way to create valid, meaningful goals. Instead, set up an ongoing dialogue with the people you aim to serve and keep asking them, “does this seem to make any difference?” and “how satisfied are you with what we are doing?” and “what can we do better?” This sets up a standard of progress that one can manage to on a day-to-day time scale. It creates expectations and norms around any type of community effort, no matter what its design, so long as the people it aims to serve are similar. And when multiple organizations work on common goals or with a shared cohort of people, it provides a comparative definition of success that breaks the tyranny of always aiming to change the world when that might not be possible in one lifetime.

    Instead of, “did we change lives?” we should be asking, “does the work we do satisfy the people whose lives we are changing?”

    These satisfaction scores vary. Where interventions are really hard and nothing appears to change (like changing rape culture), the scores will remain low – but they’ll be consistent from one actor to the next. Where interventions yield rapid results (like giving microloans), the scores will be high. But it’s more satisfying to listen, respond, and act on feedback than to hide behind the token “impact stories” that pass for rigor at virtually every charity I’ve ever seen. Narratives can inspire us, but it takes feedback to drive the disruptive innovation that will change lives for whole groups of people.

    Whatever level of satisfaction you measure turns out to be, it is useful. It can define the “good for now” for any organization. And you can drive that score higher by listening, responding, and acting on feedback.

    This is how for-profit companies manage customer relations.

  3. We need to strive for a deeper understanding of people and situations in our data hunt. The barrier is that our data is sparse and comes from just a handful of personal experiences. We can’t vacuum up metadata from the poorest billion people’s lives the way data companies do with the richest billion. There is no deep, public source of data on what it really feels like to live on $3 a day.


    [Figure: “rich thick data” – a chart of data sources, from weaker (left) to deeper (right)]


    The world’s biggest actors depend too much on the weakest forms of data, and this information isn’t up to the task of solving complex problems. We forget that the “now” way of working is temporary. Ten years from now we’ll be able to shoot video of someone and have Google annotate the moments in that video. It will tell us how a person felt as they spoke each word, using facial emotion recognition. Translation will be automatic. Images in the background will fill in the context of where this person was speaking, and what their demographic and economic status probably was, by the presence of huts or cinderblock or glass buildings. Their clothes will tell us what type of work they do, and where they shop. The shadows will reveal what hour of the day it was, or if at night, what time of year (based on star patterns). If the video is shot over wifi, geolocation will pinpoint the location. Big data is getting thicker. Are we ready?

    This is the future, but it can only be part of a brighter future if do-gooders and activists strive to make it part of their listening designs. Otherwise, it will be used by the NSA and Russian hackers and criminal thugs by default, and will become an instrument of evil.

    To solve poverty, or to end tyranny, we need to move our listening efforts from the left side to the right side of the chart of deeper data sources.

    But before we can be prepared to take advantage of what is to come, we need to make a few minor adjustments to how we work today. Getting organizations (who are still stuck in a world of spreadsheets, sadly) to follow a few basic formatting rules would allow us to merge our existing data sets and solve the statistical power problem that plagues our models (see the first sketch at the end of this post).

  4. In our discussion, someone asked, “So is your solution more like HTTP or the MLA handbook?”
    “Are you writing with a pen or scratching notes on the ground with a rock? That is the question,” I replied. Here’s how you “write with a pen,” so to speak:


    [Figure: data formatting practices, from red (worst) through yellow to green (best)]


    If we can get out of the red and at least aim for the happy medium yellow smiley face, we’ll learn more from our efforts to help people. Some day we’ll move into the green, or use software that does this for us.

    For example, doing the same work in Google Docs instead of Excel provides data scientists with an API that lets them interact with the data directly, and solves the UTF-8 encoding problem (see the second sketch at the end of this post).

    One European member of the conversation pointed out that using Google Docs also opens up a whole mess of data privacy problems. So solutions may involve trade-offs.

    My talk was titled, “self-emergent standards, or how I learned to stop worrying and love the mob.”

    The road to solving the big problems is going to be bumpy. It’s going to require trust. And giving up control, as we create a shared definition of attainable success. This is what the first step looks like. And I invite you to join the discussion at FeedbackLabs.org and FeedbackCommons.org.
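
To make point 3 concrete, here is a minimal sketch of what “a few basic formatting rules” buy you: if two organizations both export their surveys as UTF-8 CSV files with one row per response, and agree on a handful of shared column names, pooling their data takes only a few lines of pandas. The file and column names below are hypothetical, not taken from any real data set.

```python
# Sketch: merging two organizations' survey exports that follow the same
# basic formatting rules (UTF-8 CSV, one row per response, agreed headers).
# File and column names are hypothetical placeholders.
import pandas as pd

SHARED_COLUMNS = ["respondent_id", "country", "date", "satisfaction_1_to_5"]

org_a = pd.read_csv("org_a_feedback.csv", encoding="utf-8")
org_b = pd.read_csv("org_b_feedback.csv", encoding="utf-8")

# Keep only the columns both organizations agreed to collect,
# then stack the two data sets into one pooled sample.
pooled = pd.concat([org_a[SHARED_COLUMNS], org_b[SHARED_COLUMNS]],
                   ignore_index=True)

# A larger pooled sample is what eases the statistical power problem:
# either organization alone may have too few responses to detect a change.
print(pooled.groupby("country")["satisfaction_1_to_5"].agg(["count", "mean"]))
```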
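
And a sketch of the Google Docs point from step 4: unlike an Excel file emailed around, a Google Sheet exposes an API, so a data scientist can pull the latest responses programmatically, already decoded as Unicode, instead of hand-cleaning exports. This example uses the third-party gspread client; the credentials file and spreadsheet key are placeholders.

```python
# Sketch: pulling survey responses straight from a Google Sheet via its API
# rather than passing Excel exports around. gspread is a third-party client;
# the credentials file and spreadsheet key below are placeholders.
import gspread

gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open_by_key("YOUR_SPREADSHEET_KEY").sheet1

# Each row comes back as a dict keyed by the sheet's header row,
# already in Unicode, so there is no UTF-8 cleanup step.
records = worksheet.get_all_records()
print(f"Fetched {len(records)} responses")
```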
