At Feedback Labs, we have started investigating the qualities of successful social movements. Among the tactics of the past decade’s most successful advocacy campaign – Marriage Equality – affirmative messaging stood out. The turning point began with a de-stigmatization of the LGBTQ community. Was this messaging purely effective storytelling? How did so many organizations coalesce around this tactic while simultaneously supporting other threads of work? Can this tactic be executed in other movements, or is it unique to the marriage equality context? To answer these questions, we need a better impact measurement tool that gets to the root of what makes advocacy campaigns successful.
Dalberg Global Development Advisors brought a prototype to our latest LabStorm to establish such a framework for quantifying advocacy impact. Measuring the effectiveness of advocacy not only contributes to improved strategies and tactics, but also creates an environment where grantmakers can confidently invest in organizations that are most effectively accomplishing their goals. But measuring the effectiveness of advocacy work is really, really hard to do. Some of the reasons are obvious: multiple actors working towards a single goal can dilute the correlation between one organization’s set of advocacy activities and an outcome; “effective” advocacy is a long and arduous process with sometimes nebulous milestones (such as changing hearts and minds); and vanity metrics can lead to flawed decision making, while M&E often incentivizes limited measurement around accountability. All of this can result in a narrow set of data that focuses on short-term, reportable outcomes that “get the job done,” glossing over activities that could actually maximize long-term change.
In short, advocacy (and similar industries like lobbying, journalism, and diplomacy) often lacks a way to trace influence. There are many different types of advocacy, and different types of campaigns seek different results. During this LabStorm, Matt Frazier from Dalberg focused specifically on advocacy related to policy change. Dalberg proposes a framework that lets advocacy organizations measure what matters: learning.
Dalberg Framework: Nuts and Bolts
Advocacy organizations need an effective, holistic assessment that takes process and outcome into consideration. Dalberg has created a framework for analysis of advocacy pointed at policy change: strategic positioning, selection of tactics, and tactical effectiveness. An organization is likely to be effective when it demonstrates success in each category. Organizations that are weak in any one of them will likely encounter road-blocks in pursuit of their policy objective.
- Strategic Positioning: Is the organization well-positioned strategically to bring about a specific policy change? This stage requires organizations to articulate the policy outcomes they seek, and their plan to make that change happen. By creating opportunities to share their strategy, donors are more capable of realistically analyzing the organization’s likelihood for success.
- Selection of Tactics: Has the organization chosen tactics to best achieve the desired policy change? Maximum impact demands an organization’s ability to adapt to changing circumstances. How an organization chooses, and revisits the choice of, tactics is equally as influential as the tactics themselves.
- Tactical Effectiveness: Is the organization effective at using those tactics to achieve that policy change? The effectiveness of an organization’s selected tactics highlights their organizational strength, and ultimately generates lessons for future campaigns.
We found this framework to provide a useful checklist for organizations to engage with as they plan their advocacy. A topic of conversation that quickly arose was the assumption of consensus. LabStorm attendees thought the following three questions worth exploring more deeply as Dalberg moves forward with piloting this framework:
- How can learning be incentivized? Often, learning is viewed as a burden. Matt Frazier, the LabStorm facilitator from Dalberg who has worked in advocacy before, shared that often the last thing an advocacy organization wants to do after either a successful or failed project is write about it. However, writing allows for learning and reflection, which is often overlooked as the docket of pressing issues rarely shortens. Despite knowing that reflection can be invaluable to the success of future campaigns, it often feels like learning from your own work provides too small a data set (and sucks time and resources) to be useful. But what if learning is made part of a bigger conversation? It may become something that can be done across organizations and causes, simultaneously relieving a single organization from having to learn everything and incentivizing sound reflection for the greater good. Advocacy organizations may be able to engender a greater sense of trust and accountability if their learnings are habitually made public, and if those making the greatest strides receive greater status in the community.
- Will this heuristic create a trusted indicator for funders? Advocacy organizations are, more often than not, pressed for resources. This means that adding a checklist, even if it’s a proven indicator of campaign success, will need to be a financially sustainable endeavor. How do you get funders on board to trust a new impact measurement? GlobalGiving recently shifted to an analogous system: evaluating by proxy the effectiveness of organizations in the marketplace by analyzing their orientation towards learning. The tactics to evaluate an orientation towards learning – measuring the use of online webinars, rewarding the use of learning tools, and giving credit for submitting stories of failure – are increasingly trusted by the broader GlobalGiving donor community as indicators of effectiveness. This pilot similarly needs to show funders that the Dalberg framework 1) prompts the right questions, 2) gathers evidence outside the typical advocacy narrative, and 3) engages organizations in a learning mindset.
- When does it benefit, and when does it hurt, to measure the impact of advocacy? Advocacy to change policy is a hard thing to do, and it often takes many tries before getting it right. This means that, in a world not necessarily oriented towards learning, being transparent about your impact may hurt funding prospects for some organizations. Highly emotional issues that pull at a donor’s heartstrings are often more successful when illustrating the victims of bad policy, rather than highlighting a quantitative rundown of impact. Asking too many questions, or following this framework, could actually decrease a donor’s willingness to give. Therefore, it’s important to know when and how to apply this framework. When does not measuring the outcomes of advocacy work result in better fundraising outcomes? Should such organizations still follow Dalberg’s three categories of effectiveness? How do you know?
We are excited to learn more from Dalberg as they create a common language around strategies and tactics of successful advocacy, and continue refining and strengthening this framework. Want to stay involved? Reach out to us at firstname.lastname@example.org if you’d like to learn more from Matt and his team at Dalberg about their work and ongoing pilot. Contribute your thoughts below and check back here, on the Feedback Labs Blog, for updates on how we help drive this momentum forward.
Matt Frazier is a Partner in Dalberg’s Washington D.C. office and an expert in advocacy strategy and implementation. He has worked on projects in health, agriculture, supply chain and procurement, and monitoring and evaluation. He has experience working with nonprofits, foundations, the private sector, and national, state, and local governments. Prior to joining Dalberg, Matt was Director of Operations at the ONE Campaign, a global advocacy organization. There, his portfolio included global strategic planning, project management, and organizational development. He also crafted and implemented ONE’s new advocacy member engagement strategy.
LabStorms are collaborative brainstorm sessions designed to help an organization wrestle with a challenge related to feedback loops, with the goal of providing actionable suggestions. LabStorms are facilitated by FBL members and friends who have a prototype, project idea, or ongoing experiment on which they would like feedback. Here, we provide report-outs from LabStorms. If you would like to participate in an upcoming LabStorm (either in person or by videoconference), please drop Sarah a note at email@example.com.