LabStorms

LabStorms are collaborative problem-solving sessions designed to help an organization wrestle with a feedback-related challenge, with the goal of providing actionable suggestions.

The LabStorm Methodology

LabStorms are collaborative problem-solving sessions hosted by Feedback Labs, with about 10-15 organizations in attendance, aimed at helping the presenting organization tackle a specific challenge or hurdle related to feedback.

LabStorms are held every two weeks, on Thursday mornings at 10:30am ET, and last for about an hour. The session begins with a 15-minute overview from the presenting organization about their work and the challenges they are facing, ending with three specific questions that they would like help answering. The attendees spend the rest of the session working together to answer the discussion questions and provide support and advice to the host organization.

LabStorms are conducted under the Chatham House Rule, creating an atmosphere conducive to deep discussion and connection. Attendees offer creative ideas for tackling challenges and share their own experiences with similar problems. Presenters leave the session with actionable leads, meaningful connections, and new ideas they can apply to their feedback challenges.

Would you like to participate?

These sessions are invite-only. If you would like to join the LabStorm community and participate in these discussions, please sign up for the mailing list.

Upcoming LabStorms

RNW Media

January 16, 2019

10:30 – 11:45 AM ET

View on Google Calendar

ICS Centre

January 31, 2019

10:30 – 11:45 AM ET

View on Google Calendar

Past LabStorms

Feedback Labs has hosted over 100 LabStorms since 2016! Take a look at the content from past sessions.

2019

Human Nature Projects

Human Nature Projects is a pioneer in community conservation, encompassing 1200 volunteers in 102 countries. Six months in, their network is rapidly growing, but this scaling carries with it the inevitable challenges. They must break down barriers – mental and physical, cultural, linguistic and geographical – and there remain many impediments to the open exchange of ideas they envision.

Moving forward, HNP is hosting a LabStorm to hammer out these issues and discover the best means of maintaining momentum whilst ensuring effectiveness throughout its international activities. In this session, we will explore how best to design a platform that reflects a global interconnectedness of vision, providing people with the power to protect that which they hold most dear.

Discussion Questions:

  1. How can we make sure we are including diverse voices in our global network?
  2. Which measures and structures would best facilitate exchange for such a global network?
  3. How can other NGOs be incorporated into this framework for maximum impact?

Fonbnk

Fonbnk is an emerging provider of blockchain-based mobile money solutions, turning any prepaid mobile SIM card into a bank account. With Fonbnk, you can buy and send mobile data to anyone anywhere in the world instantly. For international NGOs, the platform offers low-cost global mobile data access with accountability. Fonbnk also has significant humanitarian applications, especially during disasters, because it allows people to donate mobile data to recipients across the world with no delays or fees. Recent clients include a medical aid NGO in Malawi and a malaria prevention program in Ghana. Fonbnk currently resells mobile access from 600 carriers in about 200 countries, representing ~4.5 billion people, and is continuing to expand. As they grow, they must consider how best to collaborate with the social sector to deliver mobile data to those who need it most.

Discussion Questions:

  1. How should Fonbnk engage with the NGO community?
  2. Who are the appropriate representatives to engage?
  3. How do we build constituent feedback into our humanitarian work?

WITNESS

WITNESS.org is a global organization that has collaborated with 420+ groups in over 130 countries since its founding in 1992. Specializing in the use of video for human rights, WITNESS has supported partners using video to expose and preserve evidence of war crimes, protect indigenous land rights, denounce police violence, defend immigrants, fight hate speech, and more. With a team based around the world, WITNESS is always learning about new tactics, and gaps, in the global field of video-for-change. And while each local context is unique, time after time we’ve seen activists in one place grappling with problems that activists in other countries have already solved when trying to use video safely, ethically and effectively. So we’ve committed to nourishing this knowledge: serving as a conduit when it makes sense, but also bringing people together to learn from each other and ensuring these learnings are documented, systematized, and made available. Our online Library, for example, allows visitors to download, remix, and share 180+ training resources in over 27 languages. Growing downloads and shares of these materials make us happy, but our end goal is that this knowledge contributes to real human rights impact, not fancy metrics. So for this LabStorm we’d like to draw on partners’ experiences to help us think through the following questions:

  1. How do we figure out which local learnings (for example, local advice and tactics for using video safely and effectively to defend/protect human rights) have global salience/value?
  2. How do we streamline and share these globally salient tactics and learnings, without falling into a “copy-paste” mindset?
  3. How can we get high-quality feedback (not just anecdotes) from people who download and use our materials so we can continue improving our global knowledge base?

Africa's Voices

Africa’s Voices, a Kenya-based organisation with a mission to enable individual and collective citizen voice to drive accountable service delivery and governance, has developed a one-to-one conversational channel for engaging citizens via SMS in co-designing interventions that aim to improve their lives. We strive to close feedback loops between citizens and service providers by focusing on meaningful conversational engagement with people, on their own terms and in their own language, by interpreting “messy” subjective data. The objective is two-fold: service deliverers (governments, NGOs, humanitarians and others) design programmes and policies that are grounded in the desires, needs and experiences of the people they serve, while citizens become active participants in decisions that affect their lives and are able to hold those service providers to account.
We would like to use this LabStorm to demonstrate Africa’s Voices’ two-way channel for feedback and accountability and to spark a lively discussion with partners in the accountability space, particularly in the context of international development, about how it could be used to facilitate vibrant, interactive communications with programme beneficiaries, allowing them to become co-designers of interventions that aim to improve their lives.

Discussion Questions:

  1. What’s the unique value that is enabled by unstructured conversations?
  2. How do we communicate the value of subjective “messy” data?
  3. What place does subjective data derived from citizen feedback have in the crisis of policymaking?

What was the impact of this LabStorm? Read the recap here.

TISS-SVE

About Tata School of Social Sciences – School of Vocational Education

The Tata Institute of Social Sciences is a premier institute of social work in India, established in 1936. It received the status of a University in 1964 and is now one of the top-ranked universities in India. The School of Vocational Education was set up to provide immediate and definite interventions to improve the skill levels of millions of youth in the country through appropriate vocational education programs.

After a few initial setbacks, TISS-SVE has succeeded in developing a work-integrated training model of vocational education involving different types of partners with specific roles in the delivery of vocational courses.

The model is self-sustaining, low-cost and scalable. At the moment we offer 33 B.Voc. programs in 19 different sectors with the help of 19 vertical anchors and more than 240 hub partners, and have more than 8000 students currently pursuing their B.Voc. programs with us. More than 2000 students have completed their B.Voc. degree with us, and more than 70% of them found employment immediately after the course.

We have also set up a system of gathering feedback every month from 10-15% of stakeholders, either through telephone conversations or through reports sent in by counselors after their monthly life-skills sessions. Once a year we organize a Hub-meet and connect with a larger group of hubs to apprise them of our concerns and achievements and get their suggestions. These inputs are used to take corrective measures for day-to-day operations and to address students’ concerns.

For the first time in May-July 2019 we conducted 360-degree feedback through telephone interviews of some stakeholders and online surveys for other stakeholders in the ecosystem. We will present some of the significant results in the LabStorm session.

Discussion Questions:

  1. How do we move from one-on-one feedback collection to large scale feedback collection and still retain the same quality and value of data? How often do we have to collect feedback when we are working at scale?
  2. How should we engage with partners to make sure that the 360-feedback results in changes?
  3. We know the TISS-SVE model is replicable. How can we use feedback data to show the impact of our model and make the case for the importance of vocational education in India and abroad?

What was the impact of this LabStorm? Read the recap here.

OVD-Info

Project summary 

OVD-Info has designed a data-driven primary constituent accountability project. We work with five main constituent groups within the scope of the Resilient Roots (RR) initiative: activists, journalists, donors, readers, and the OVD-Info staff. As part of the project, OVD-Info collects feedback from these constituent groups through a wide variety of tools, including F2F interviews, online surveys, bots, and analytics from our website traffic, social media, etc. We use different tools based on the way we engage with each constituent group, though most of the feedback and data we collect is entirely anonymous, making it at times hard to identify which constituent type it comes from.

Almost all of the quantitative information and data we collect (including the number of calls on our hotline, number of likes on Facebook, average donations, etc.) is fed onto a dashboard we have created to help us track, interpret and analyse how we are doing on key indicators. An expert board of advisors (OVD-Info staff working on various parts of the project) has been set up to lead the analysis and interpretation of this data and then make changes. The qualitative data is analysed separately and followed up on accordingly.


Feedback-related challenge 

During the course of our project, we have faced several issues. The first is involving the whole team in the accountability processes. We have implemented some mechanisms, but they were built by the project team alone; we believe accountability should become part of the whole team’s mindset, and we have not yet been able to promote it that widely. The second issue relates to closing the feedback loop when it comes to quantitative feedback. Based on the analysis we do, we can implement many minor changes that improve the user experience of our primary constituents. But each change is so small that we have doubts about whether to communicate it back: the number of such messages would be very high and might decrease the loyalty of our primary constituents. Also, some of the feedback we collect is quite sensitive and private, and not everything can be made public, from both a privacy and a security point of view.

About Resilient Roots 

The Resilient Roots initiative tests whether organisations that are more accountable and responsive to their primary constituents are more resilient against external threats. We are working with 14 CSOs, among which is OVD-Info, across a range of locations and issues to support them in designing and rolling out year-long accountability projects. The initiative is coordinated by CIVICUS. Technical support is provided by Keystone Accountability and Accountable Now, along with our regional partner for Latin America, Instituto de Comunicación y Desarrollo (ICD).

What was the impact of this LabStorm? Read the recap here.

Sopact

Sopact is a software company, based in the San Francisco Bay Area, on a mission to make impact measurement and management simple. Our aim is to bring cutting-edge technology to the social sector that will help organizations measure impact easily and apply changes to their intervention methodologies to improve the impact they have on their stakeholders.

Since Sopact is deeply involved in analyzing stakeholder data to measure impact, qualitative and quantitative data play a vital role in the process. We at Sopact are striving to improve the quality of data we get from stakeholders, and this starts with asking the right questions through the right medium. By asking the right questions, we intend to give organizations the right feedback so that they have the opportunity to improve their intervention methodologies and have an even better impact on people and the planet.

Discussion Questions:


  1. There are many ways to survey people. What is the best way to reach people to get an honest response about their social sector experience (door-to-door surveys, emails, mobile data collection, etc.)?
  2. Asking something like “would you recommend this organization to a friend?” might not yield rich enough data about a customer’s experience and how it changed their life. What questions would get stakeholders to open up and truly reveal the impact that an organization has had on their life?
  3. We want to build an open platform with resources for better impact measurement and management practices. How can we standardize our client experience survey so that it works across multiple social sector contexts and we do not have to reinvent the wheel each time we assess a new organization?

What was the impact of this LabStorm? Read the recap here.

Accountability Lab

Integrity Icon, Accountability Lab’s flagship program, has become a global movement, on the ground, online and through the media, to celebrate and encourage honest government officials. We want to move away from “naming and shaming” corrupt leaders and towards “naming and faming” bureaucrats who are working with integrity. Integrity Icon is a global campaign that was carried out in seven countries in 2018, with millions of viewers and hundreds of thousands of voters. Read more in the Economist here.

The goals of Integrity Icon are threefold: first to create role-models and celebrate honest public officials; second, to inspire young people by indicating that government is a career path in which one can work with integrity and honesty; and third, to connect and support the winners to help build coalitions to push for further reform and value-based decision-making over time.

The Integrity Icon process is evolving but essentially involves 4 steps over the course of a year:

  1. Nominations and Selections – citizens can nominate online or through SMS/WhatsApp. We also have networks of volunteers that collect hard-copy nominations in hard-to-reach places. Esteemed panels of judges help us select the top five Icons each year.
  2. Filming and Outreach – the five finalists are filmed doing their jobs, talking about why it is important to have integrity, and interacting with others who can vouch for their great work. These episodes are shown on national TV and social media and adapted for radio.
  3. Voting and Ceremony – citizens are made aware that they can vote for their Integrity Icons through social media, e-mail and phone. After a public voting period of two weeks, the Integrity Icons are crowned in public ceremonies attended by VIPs and the media.
  4. Coalition-building and Support – we work with the Icon community through summits, training programs, fellowships, events and retreats to begin to push for norm changes within institutions, agencies, civil service training programs and schools/colleges.

Having run the campaign successfully in Nepal, Pakistan, Liberia, Nigeria, Mali, South Africa and Sri Lanka (through a partnership), Accountability Lab launched Integrity Icon for the first time in Mexico last month.

LabStorm Questions: 

  1. We’ve learned that each context requires different low and high tech tools to activate participation. In the US, what creative ways can we use to encourage community members to join the conversation on what integrity in the civil service should look like? How do we inspire them to participate in the campaign?
  2. We never run this project without building civil society partnerships. What partnerships should we build in the US to promote and strengthen the program here? What are the levers for buy-in?
  3. Ancillary programs provide ongoing support to Icons, other civil servants and youth. What ancillary programs would work in the US context?

What was the impact of this LabStorm? Read the recap here.

La Maraña

La Maraña is a woman-led, participatory design and planning non-profit that promotes the inclusion and empowerment of Puerto Rican voices in the design and creation of our cities and communities. It is amidst and in reaction to the cruel aftermath of hurricanes Irma and María that our team dared to imagine a community-driven alternative to our future. Inspired by the communities we serve and motivated by our island’s deep need for locally-embedded, long-term recovery efforts, our team at La Maraña designed Imaginación Post-María. Combining participatory planning and design with the power of community granting and capacity-building, Imaginación Post-María’s 6-step model offers citizens direct power to imagine, plan and build the changes they desire in their communities.

After a year and a half of working hand-in-hand with three community partners across the island in order to bring Imaginación Post-María to life, we are faced with both the challenge and opportunity for growth and scale. As we sunset this initial phase of Imaginación Post-María, we hope to create educational, open-source deliverables that can spark bottom-up action across the island and guide our future Scalability and Growth Plan. Specifically, we will be creating a Toolkit that will outline our methodological approach and a Documentary that will use storytelling as a window to holistically capture these equitable, community-led rebuilding efforts.

Discussion Questions:

  • How do we scale our model humanely? How do we balance the freedom and openness of community participation with a replicable methodology that maintains our essence as we grow?
  • How do we devise fundraising strategies that offer our organization long-term stability?
  • We are being pulled to have a stronger voice in advocacy, but also need to finish our pilot project implementation, which requires focusing on our on-the-ground work. How do we use the creation of the Toolkit and Documentary as tools for scalability and advocacy simultaneously?

What was the impact of this LabStorm? Read the recap here.

Root Change/Pando LLS

The web-based Pando LLS platform uses network maps and feedback surveys to visualize, learn from, and engage with development systems to foster increased local ownership. Pando LLS is rooted in four primary measurement dimensions that assess the vitality and autonomy of a local system: 1) Leadership; 2) Connectivity; 3) Mutuality; and 4) Financing. Root Change and Keystone Accountability will share how they have co-developed this tool through the USAID Local Works program. A lively discussion about how it could be used to facilitate learning and adaptation and track progress towards fostering locally led development systems will follow.
Discussion Questions:

  • Are these the right localization measurements? How else has your organization measured localization?
  • How could your organization use this? Is the process clear and actionable?
  • How could we (our sector) integrate this into our current work? What are the values/incentives for people to participate?

What was the impact of this LabStorm? Read the recap here.

The Feedback Quiz

Most social development organizations believe it is important for people to have a meaningful voice in the programs that affect them. But how many really listen effectively? And how many respond to what people say? Feedback Labs has been working on the Feedback Quiz – a 10-minute online survey that will tell you your strengths and weaknesses along your feedback loop and how you stack up against peers. With more benchmarking data and more tools to support feedback practice, version 2.0 is better than ever, with improved charts and visualizations, better advice, and a more accurate overall score. But to truly make the most of the quiz we need your help. Join us for a LabStorm tomorrow to explore how we can take the Feedback Quiz to scale.

Discussion Questions:

  1. What can we do to attract quiz takers? How could you envision utilizing the quiz in your work?
  2. What is the value (if any) of multiple people at the same organization taking the quiz? If valuable, how many people at the same organization should we aim for?
  3. What else can we do to help quiz takers take the next steps in their feedback journey after taking the quiz?
  4. (Bonus) What comes next? What should FBL develop to complement or build on the quiz?

Nest

Today we incorporate feedback into our Ethical Compliance program and verify the dissemination of compliance policies and best practices from the business leader to the workers through our Compliance Assessment and a Worker Well-being Survey. The compliance model is a training-first program unlike traditional auditing programs and is dependent on human capacity.

One of our strategic priorities is to expand supply chain visibility and accountability and to improve individual worker well-being and agency. We have to figure out how to scale our work within an informal and dispersed workforce while ensuring feedback loops with all stakeholders (including business leaders and individual workers) remain intact. We have a few ideas for where we could go with this, and we’re looking to the FBL community to help us narrow the approach.

Discussion Questions:

  1. We may not always visit the same homes to conduct our Worker Well-being Survey, although the workers will be employed by the same central business. What threat, if any, does this pose to the reliability of our feedback data?
  2. How do we stay connected with local resources and community structures (tribal leaders, community organizing bodies, municipal governments) in the long term if we don’t visit the same communities over and over again?
  3. What do we gain or lose by incorporating technology (like push messaging) to scale the reach of our Worker Well-being Survey? What cautions should we consider?
  4. Creating a universal tool for surveying worker well-being across a variety of countries with different regulatory frameworks, cultures, etc. is challenging. How tailored can we afford to make it while still scaling our efforts? What considerations should we keep in mind?

What was the impact of this LabStorm? Read the recap here.

Kuja Kuja

Kuja Kuja is a start-up of the American Refugee Committee that began with a simple observation: at some point, humanitarian organizations like ours had stopped thinking of refugees as their primary customers. We had deprioritized the voices of the people we are here to serve, and that wasn’t good enough. Kuja Kuja, ARC’s response to this issue, is a real-time feedback system that collects, analyzes, and supports clients to take action on customer feedback, helping organizations design and deliver better services.

With Kuja Kuja, our goal is to create agency amongst customers around the world and to shift people from passive receivers of services to active, discerning, and demanding consumers of them. We do this in two ways: first, we create granular, objective, real-time data sets describing customer satisfaction with the services being offered to them, helping to align the decision-making apparatus of humanitarian actors around the voice of their customer. Second, recognizing that real-time data requires radically reduced response times and new ways of working, we support those actors and the communities in which they operate to access, interpret, and take optimal action on that data.

Discussion Questions:

  1. At Kuja Kuja we focus on analogies to make our approach easier for the humanitarian community to understand. For example, we say that Kuja Kuja is like a FitBit for the Humanitarian Sector, shifting organizations from yearly check-ins on the health of their operations to having a daily pulse of it. What other analogies might we use to effectively communicate our work?
  2. In the private sector, businesses start to fail if customers don’t like their products, and the long-term success of companies often rests on their ability to provide superior customer experiences. However, in the humanitarian space, customer experience of a product or service is rarely considered, despite the protracted nature of many humanitarian crises. How can we ensure that customer experience, the demand from people for dignified services, becomes the relevant standard for judging the efficacy of humanitarian organizations?
  3. Imagine that you are a decision maker in a humanitarian organization: a CEO, a Country Director, a coordinator for health service delivery, a grant writer, and so on. If you could draw a dashboard of the information you need on a daily basis to do your job better, what information would be on that dashboard, and how would it be presented? Can you draw some?

What was the impact of this LabStorm? Read the recap here.

Open Contracting Partnership

Open Contracting Partnership is excited to share the results of its third round of measuring our community’s size, reach, and strength with Marc of Keystone. Building on our LabStorm from last year, we’ll explain how we took action based on last year’s results, review this year’s results and what we plan to do about them, and share what we’ve learned over three years of taking standardized measurements. We’ll also share how what we learned over these three years informed our new set of indicators, which we will begin tracking under our new five-year strategy. We welcome all critical feedback about our new targets and measurement plans.
Discussion Questions:

  1. What are the pros and cons of social media vs. direct communications like emails as measurements of network connections?
  2. How could we better understand and measure the quality of connections? What would you need in order to get a similar methodology of tracking standard indicators of movement growth and engagement quality adopted in your organization?
  3. How do you separate “tracking trends” from “setting performance targets” based on measurement?

What was the impact of this LabStorm? Read the recap here.

Where We Live NYC

Where We Live NYC is the City of New York’s community-driven process to develop the next chapter of fair housing policy that confronts segregation, fights discrimination, and builds more just and inclusive neighborhoods. The process includes extensive engagement with residents, community leaders, and government partners – including 60+ focus group style “Community Conversations” led by community-based partners in 10 different languages, and a Fair Housing Stakeholder Group that includes 150+ advocates, service providers, researchers, and community leaders who have been engaged throughout the process. Join us in a LabStorm where we share our engagement approach to date and work through upcoming challenges.
Discussion Questions:

  1. How can we best position Where We Live NYC as a national leader on fair housing issues and influence other practitioners?
  2. How can we design a Fair Housing Together Summit that achieves our goals of connecting, educating, fostering dialogue, and closing the feedback loop?
  3. How can we continue building momentum, accountability, and collaboration during the implementation phase of this effort?

What was the impact of the LabStorm? Read the recap here.

Accountability Lab & Feedback Labs

Earlier this month, Accountability Lab and Feedback Labs teamed up to present at the World Bank Data Day – a gathering of World Bank teams working on key data issues from open government data to climate data to human capital and more. As external experts invited to look at ongoing work (i.e. what’s working, what’s not, and what’s coming for country counterparts and partners in developing contexts), we declared “Feedback is the most valuable piece of data you will ever get.”

Throughout the course of the day, we collected data on attendees’ use of data and of constituent feedback. Our conversations highlighted consistent pushback on treating perceptual feedback as important data. We believe that for feedback to become a movement, it is essential that “data” includes feedback because 1) it may be able to predict outcomes, 2) in some cases it has been shown to improve outcomes drastically, and 3) it can catalyze essential collective action.

Join us at tomorrow’s LabStorm to discuss, “Do we have evidence that feedback may predict outcomes? Is it enough? If not, should we prioritize generating it and how? And, how can the feedback community affect the institutionalization of constituent feedback on a much larger scale?”

What was the impact of this LabStorm? Read the recap here.

Poverty Stoplight

Poverty Stoplight, created by Fundación Paraguaya, seeks to activate the potential of individuals and families to eliminate multidimensional poverty through a self-evaluation tool. The Stoplight is used to support poor families in assessing their poverty levels and identifying and implementing practical solutions for addressing their challenges. Poverty Stoplight has ultimately improved the lives of thousands of families through a process that enables poor families to be the protagonists of their life-changing story.

The Poverty Stoplight is a social innovation tool that includes a metric and a methodology. Program staff work directly with families in poor communities to evaluate poverty levels across a variety of dimensions and indicators. Each indicator is presented in three different scenarios categorized as red for extreme poverty, yellow for poverty and green for non-poverty. Each family selects the scenario most relatable to them for each indicator. Then, the methodology generates poverty elimination life maps that go beyond traditional aid, seeking to bring about changes generated by the families themselves.

Our goal is for every family in the world to assess their multidimensional poverty level. Our main challenge is making Poverty Stoplight the reference of choice for participatory poverty analysis and tailor-made, family-driven action. We seek to activate millions of families to measure their own multidimensional poverty and enable them to take action to overcome it.
Discussion Questions:

  1. How can we promote systems change and achieve global scale from a small, little-known country? (Is it even possible?)
  2. Currently, the families we work with are stakeholders of our paying customer. This model provides steady, but linear, growth. How can the Poverty Stoplight disintermediate and reach families all over the world directly?
  3. We are currently active in more than 20 countries, one of them being the U.S. at a moderate scale. How can we access the US market at a larger scale?

What was the impact of this LabStorm? Read the recap here.

ideas42

ideas42 is a nonprofit organization that creates social impact through the application of insights from behavioral science to public policy challenges around the world. Our work spans a variety of fields, such as education, consumer finance, health, criminal justice, and others. Our projects feature the rigorous application of qualitative and quantitative research methods to diagnose the nature of behavioral impediments to beneficiaries’ actions and decisions, followed by the design and testing of new program delivery methods to “nudge” beneficiaries towards realizing better outcomes for themselves.

We are currently applying behavioral science to improve government responsiveness to citizen-submitted requests and feedback on civic monitoring platforms. Civic monitoring platforms allow citizens to submit requests to government officials. Despite their enormous potential, there is mixed evidence on whether they actually improve service provision or governance outcomes. Much attention has been devoted to user engagement as a driver of responsiveness, but attempts to increase engagement have not led to sustained improvements in responsiveness.

If user engagement cannot explain non-response, neither can a lack of resources, policy alignment, or political will. In fact, complaint platforms tend to be well-funded and installed in high-capacity government offices where there is an existing incentive for officials to deliver on their promises. So, why does responsiveness remain a challenge in many cases around the world? Behavioral science may offer a compelling opportunity to improve government responsiveness by improving officials’ performance and service provision through inexpensive interventions that work within existing systems.
Discussion Questions:

  1. Considering the four features of responsiveness (time, accuracy, equity, and satisfaction), should some features be prioritized over others? Will some of these be harder to measure?
  2. Considering the seven steps taken to resolve requests, should we prioritize our focus on any steps over others? Should we focus on the same steps across cities, or on the steps that are causing the biggest issues for each particular city we partner with?
  3. In what ways does politics bias responsiveness, including how requests are received, tracked, prioritized and resolved? What sort of political incentives exist for those at the bureaucratic level to resolve complaints overall (or some more than others)?

What was the impact of the LabStorm? Read the recap here.

RNW Media

RNW Media is a Netherlands-based NGO, funded by the Dutch government and selected foundations, that gives young people a voice in repressive societies across the Middle East as well as North and Sub-Saharan Africa. We work with local partners to create and nurture large online communities of youth to allow them to take part in robust, safe, and respectful discourse. In doing so, we support freedom of expression, social inclusion, and civic participation. But we aspire to something even larger: citizen feedback at scale. That is, we want to close the loop between the young people we reach, what they have to say about their lives and societies, and the decision-makers and opinion leaders (including government officials and civic leaders) who need to hear and act on their views. Our efforts are gaining traction. But, with greater democracy, equity, and justice as our goals, we want to bring our model to scale. We believe the field of constituent feedback offers critical insights to inform our work.

Discussion Questions:

  1. We believe our experience in soliciting citizen feedback is analogous to constituent feedback. Do you agree (differences/similarities)? If so, how do we apply the core principles and best practices of constituent feedback to our work?
  2. How do we ensure integrity – that the feedback is representative or contextualized with transparency; that it helps and does not harm the population we’re representing; and that we are accountable to our audiences so they continue to share their views?
  3. How can we measure the success of our citizen feedback efforts? Ultimately, success shows up in improved laws, regulations, and government practices, but what are the interim indicators? And how do we best manage and account for the limitations of channeling citizen feedback in repressive societies?

What was the impact of the LabStorm? Read the recap here.

2018

Development Gateway

Linking Data and Decisions: How Can We Use Data to Influence Government Behavior?   

Development Gateway (DG) delivers digital solutions for international development, creating tools to make development data easier to gather, access, use and understand. DG works across sectors to create tools that help institutions collect and analyze information; strengthen institutional capability to use data; and explore what incentives, structures, and processes are needed to enable evidence-based decisions. By focusing on a decision-centered approach to the use of data, DG helps to build institutions that are accountable, better able to listen and respond to the needs of their constituents, and are efficient in targeting and delivering services that improve lives.

In this LabStorm, DG will explore how to integrate development data into Haiti’s budget preparation process, further institutionalizing the use of data as a critical part of planning for the country’s development. DG built a custom Aid Management Platform for Haiti shortly after the 2009 Earthquake to help support the government’s goals of tracking and managing foreign aid flows. Current activities focus on deepening investments made in the Aid Management Platform, with a focus on encouraging the use of the Platform’s data across sector ministries and among the donor community, as well as linking the Platform to other existing government systems.

Discussion Questions:

  • How can we encourage the use of development aid data in the planning and budgetary preparation of Haiti’s government agencies? How can we make data usable/useful for government audiences?
  • How can we navigate the political (and technical) challenges of linking different data systems? How do we get different system “owners” to see system linkage as beneficial?
  • How can we use data to strengthen collaboration in the donor community and reduce the burden on government agencies?

What was the impact of this LabStorm? Read the recap here.

Environmental Defense Fund

Fishing for Solutions: Can Feedback and Social Media Create Scalable Sustainability in Fisheries?  

Environmental Defense Fund, a leading international nonprofit organization, creates transformational solutions to the most serious environmental problems. EDF links science, economics, law, and innovative private-sector partnerships. By focusing on strong science, uncommon partnerships and market-based approaches, EDF tackles urgent threats with practical solutions.

In this LabStorm, EDF will explore how to quickly build a critical mass of people who want to see and create sustained change to fisheries worldwide, and magnify their impact. EDF Oceans works to create sustainable fisheries that provide more food, more prosperity and greater environmental wellbeing for people, their communities and the planet. We work to end overfishing by deploying science-based catch limits, economic incentives and technological innovations to return our oceans to abundance and ensure that people and nature prosper together. By working in 12 key countries, which together make up 61 percent of the global catch, we see a future where we can bend the arc of progress toward sustainable fisheries that deliver fish for life.

Discussion Questions:

  • How can we learn to deploy social media and other mobile platforms/tools to accelerate the adoption of sustainability in fisheries?
  • How can we better understand the core kernels of fishery management knowledge that could empower a fisher to take action on their own behalf while also helping the long-term health and sustainability of their fishery? How can we learn to communicate the knowledge in a more accessible way that can overcome language and/or formal education barriers?
  • How can we disseminate that knowledge to the growing numbers of people we hope to attract? How can we support our early adopters to be change agents for sustainability in fisheries?

What was the impact of this LabStorm? Read the recap here.

Impact Experience

Impact Experience: Building Bridges Between Funders and Marginalized Communities

Impact Experience builds lasting relationships between investors, philanthropists, innovators, entrepreneurs and community leaders — linking vision with action and directing investment to solve society’s greatest challenges. By invitation, they go into some of the most disadvantaged communities to facilitate convenings designed to generate trust, enhance strategy and accelerate transformation. Their goal is that together we can ensure every community has the access, relationships, and resources they need to reach their full potential and contribute to a more inclusive, sustainable, and prosperous world. They have a particular focus on implicit bias and increasing proximity to provide more context in the process of engaging in marginalized communities.

We are currently working out how to maintain depth of engagement in the communities we serve while scaling to engage an increasing number of communities. We are considering different models, such as a train-the-trainer structure and ambassadors, to engage an increasing number of people and communities in our work. We are interested in exploring what has and has not worked for similar initiatives.

What was the impact of this LabStorm? Read the recap here.

Transparency & Accountability Initiative

Supporting Funders to Strengthen the Impact of Data for Transparency and Accountability

Transparency and Accountability Initiative (TAI) is a donor collaborative that supports members to work together to improve their grantmaking practices and boost collective impact around four focus areas: Data Use for Accountability, Strengthening Civic Space, Taxation and Tax Governance, and Learning for Improved Grantmaking.

Global funders invest significant resources in data for transparency and accountability (broadly, “governance data”), but what has been the overall impact, and how can we do better? As part of TAI’s focus on promoting data use, we are helping donor members identify key barriers to data use and impact, and improve the targeting and effectiveness of governance data funding. This year, we reviewed lessons learned on the outcomes of governance data funding to date, and developed basic guidance for funders and grantees to consider when designing governance data programs.

We are seeking insight on how to put this guidance into action and ensure relevance and uptake among funders and grantees.

Discussion Questions:

  1. Are these questions useful/relevant for your organization/work and are there tweaks you would recommend? How might accountability actors take these considerations on board?
  2. What might be effective strategies or approaches to influence funders, particularly to promote engagement with data users and building feedback loops into governance data activities?
  3. How can we encourage funders and grantees to measure progress on data use and make the impact of data investments more apparent, perhaps drawing from good practices from feedback initiatives?

What was the impact of this LabStorm? Read the recap here.

Accountability Lab

How Can Data Build Trust Between Communities and Government?

What was the impact of this LabStorm? Read the recap here.

TI-Ukraine & Open Contracting Partnership

Do(Zorro)ing Open Contracting Right: Citizen Monitoring in Ukraine

What was the impact of this LabStorm? Read the recap here.

Praekelt.org

Long Distance Relationships: How Can We Use Feedback to Build Effective Remote Working Cultures?

What was the impact of this LabStorm? Read the recap here.

City of Austin

Breaking Through the Illusion of Transparency: Homelessness Services

What was the impact of this LabStorm? Read the recap here.

Irvine Foundation

Inclusivity & Feedback: How can we practice inclusive feedback without experiencing analysis paralysis?

St. Thomas Recovery Team

What will it take to build 4,000 homes on a rock?

Nurse Family Partnership

Identifying a “threshold” for prioritizing feedback, tracking feedback, and designing tools

Transparency & Accountability Initiative

Tools to Open Up: A Compendium of Donor & CSO Strategies to Combat Shrinking Civic Space

What was the impact of this LabStorm? Read the recap here.

Care Opinion

Online feedback to support learning and change in healthcare

Polaris

Engaging Survivors in Strategies to Prevent and Disrupt Human Trafficking

What was the impact of this LabStorm? Read the recap here.

VocalEyes

Filling the Democratic Void and Enabling Mass Participation

What was the impact of this LabStorm? Read the recap here.

Memria

Stories and Philanthropy: Using first-person audio accounts to improve feedback

What was the impact of this LabStorm? Read the recap here.

Open Contracting Partnership

Developing (good) indicators for advocacy organizations from social network analysis

This Is My Back Yard

Scaling Up: So you have a proof of concept – now what?

What was the impact of this LabStorm? Read the recap here.

What Went Wrong?

“What Went Wrong? Citizen Journalism on Foreign Aid” – Building a feedback loop between journalists and aid recipients

What was the impact of this LabStorm? Read the recap here.

Global Delivery Initiative

Delivery Labs, Ethiopia

What was the impact of this LabStorm? Read the recap here.

ITVS

DocSCALE platform: a new digital survey approach that uses peer-ratings to surface the wisdom of the crowd

What was the impact of this LabStorm? Read the recap here.

Keystone Accountability

Operationalizing a Feedback-based Business Model

What was the impact of this LabStorm? Read the recap here.

2017

Ground Truth Initiative

Open Schools Kenya

First Book

Needs Index

Siegel Family Endowment

In the Loop: Building a candid feedback cycle between grantees and funders

What was the impact of this LabStorm? Read the recap here.

Journimap

Know your clients!

University of Michigan

Using GIS and Social Autopsy to Drive Local Innovations to Improve Maternal and Newborn Health in Rural Ghana

Civic Hall

Can a “Yelp” for Homelessness Unlock Transparency?

What was the impact of this LabStorm? Read the recap here.

Makerble

Making it easy to measure impact

What was the impact of this LabStorm? Read the recap here.

GlobalGiving

How do we best listen to refugees?

What was the impact of this LabStorm? Read the recap here.

Transparency International

Ambient Feedback

What was the impact of this LabStorm? Read the recap here.

New Philanthropy Capital

User-centered Philanthropy

What was the impact of this LabStorm? Read the recap here.

Ground Truth Solutions

The feedback chain is only as strong as the weakest link

What was the impact of this LabStorm? Read the recap here.

I Know Something

Unlocking the Insights in Personal Stories

What was the impact of this LabStorm? Read the recap here.

Carvajal

Can feedback collection improve education intervention?

What was the impact of this LabStorm? Read the recap here.

Village X

Feedback Signal in Least Developed Countries

GlobalGiving

Constituent-driven Program Funding

What was the impact of this LabStorm? Read the recap here.

International Planned Parenthood Federation

Provide a Framework to Elicit Meaningful Feedback

What was the impact of this LabStorm? Read the recap here.

Dalberg

Measuring the Impact of Advocacy

What was the impact of this LabStorm? Read the recap here.

Gallup

Ask the Right Questions, Reach the Right People

What was the impact of this LabStorm? Read the recap here.

2016

Global Delivery Initiative

DeCODE

What was the impact of this LabStorm? Read the recap here.

Makaia

Managing Constituent Data

Charity Navigator

Crowdsourcing Nonprofit Evaluations

INGO Accountability Charter

Building the Accountability Frame of the Future

Unpack Impact

Decolonized Design, Part 2

What was the impact of this LabStorm? Read the recap here.

Development Gateway

Results Data Initiative

What was the impact of this LabStorm? Read the recap here.

Unpack Impact

Uncovering a Code for Decolonized Design

What was the impact of this LabStorm? Read the recap here.

Keystone Accountability

Can website data help measure our impact?

What was the impact of this LabStorm? Read the recap here.

Open Contracting Partnership

How can email metadata tell you who to stop emailing?

What was the impact of this LabStorm? Read the recap here.

LabStorm Recap Blogs

We distill the takeaways from each LabStorm session into a blog post recap. Read recaps from recent LabStorms here.