Creating Fit-for-Purpose Peacebuilding Evaluation: Three Key Investments

18 August 2016   ·   Andrew Blum

Monitoring and evaluation (M&E) processes for conflict prevention programs need to be adapted to their unstable and fluid contexts. Donors should build closer partnerships with implementers, provide adequate resources for (shared) data collection, and develop indicators to make credible long-term claims.

In this post, I will address the following question: how do we develop strategically and politically relevant monitoring and evaluation (M&E) processes for conflict prevention and peacebuilding programs?

Keeping in mind the specific needs of larger donors, I will assume here that donors want two things: demonstrably more effective programs, and the ability to demonstrate accountability in the way program funds are used.

Conventional M&E is unfit for conflict scenarios

Since both peacebuilding and M&E are big, sometimes sprawling, topics, I want to focus on one particular issue: the context in which peacebuilding programs take place. These contexts are, by definition, unstable and fluid. This fact is obvious, even a truism, but its implications for how we design M&E systems are large and often go unacknowledged. For programs to stand a chance of being successful in these contexts, organizations have to implement their programs in a flexible, adaptive way.

A traditional M&E approach – one that starts with a finalized and static project design, monitors inputs, activities, and outputs during implementation, and concludes with a large-scale evaluation after the program is completed – is ill-suited to this type of fluid, rapidly changing context. What is needed instead are M&E processes that accompany the project throughout its implementation: M&E that creates continuous, evidence-based learning and feedback loops to guide implementation, inform shifts in strategy, and track progress toward the project’s goals, even as those goals evolve.

What does this mean for donors? What changes should be made to create new, fit-for-purpose M&E systems in conflict contexts? In answer to these questions, I would argue that donors should focus on three kinds of investments.

Build a true partnership between donors and implementers: adaptation with accountability

In a 2014 online piece for Foreign Policy, I argued that, for peacebuilding programming, donors need to demand a different kind of accountability from implementers than for more traditional development programs. Specifically, they should ask each implementer the following three questions:

  • What results did your program achieve?
  • How did your program adapt to the context in which it was implemented?
  • What evidence do you have to support your decisions about how the program adapted?

Using these questions acknowledges the unstable nature of conflict environments and allows for flexible implementation of projects. It does so, however, in a way that still allows donors to hold implementers accountable. In effect, donors are saying to implementers “please, adapt as needed, but show us with evidence how and why you are adapting.”

Working in this way requires a different and closer relationship between donors and implementers. If I had to guess, I would say donors spend roughly 80% of their time on a project before it launches (designing the solicitation, reviewing proposals, conducting due diligence, and so on) and only 20% actually monitoring and overseeing it once it is underway. This model will not work if we want to create effective programming in conflict contexts.

Instead, donors need to build a true partnership that involves closer interaction throughout the course of the project. This in turn requires investment both in the time and effort it takes to establish trust and build a deeper relationship (for instance, trips to donor headquarters for country directors) and in the effort it takes to gather and use evidence to justify shifts in programming strategies. Effective, accountable programming in conflict areas requires creating more rapid feedback loops, in which evidence is continually used to adjust program strategies.

Invest in data collection and analysis for evidence-based adaptation

Effective, accountable programming requires feedback loops, and feedback loops require rigorous and cost-effective data collection. If we expect implementers to respond flexibly to fluid, unstable conflict contexts, there must be rigorous data collection throughout the project, not just at the evaluation stage, as is often the case.

Investing in data collection should take two forms. First, project budgets should include more resources to implement effective data collection. If we are asking implementers to move beyond simple input/output tracking, resources need to be provided to support this shift.

Second, donors should invest additional resources in “public goods” that create general capacity for effective data collection. Given the nature of data collection that is required, and the difficulty of collecting data in conflict contexts, it is unrealistic to ask each implementer to create their own fully-fledged data collection and analysis capacity. These public goods could include, among other things, shared data collection tools and technology, shared data collection capacity (for instance, a pool of trained enumerators), common monitoring and indicator frameworks, and common data sharing, analysis, and visualization platforms.

For instance, at my previous organization, the United States Institute of Peace, the United States Agency for International Development provided resources for an effort, called the Initiative to Measure Peace and Conflict Outcomes (IMPACT), to develop a common monitoring framework and data collection strategy for all US government-funded peacebuilding work in the Central African Republic. This effort is an experiment, but if it proves successful, it will provide one model for moving beyond individualized project monitoring toward more shared data collection and analysis approaches.

Make credible long-term claims: what is the cholesterol of peacebuilding?

To justify their funding, donors need to show that their projects are creating meaningful results. The difficulty is that short-term monitoring data cannot demonstrate larger-scale impact. On the other hand, larger-scale evaluations that can provide evidence of broader impact are often ill-suited to rapidly changing conflict contexts.

To show a way out of this dilemma, a health-related analogy is useful. Put simply, peacebuilding needs to find its cholesterol. Imagine a program designed to reduce heart disease. One way to evaluate it would be to implement the prevention activities and then wait 30 years to see whether rates of heart disease are lower than would otherwise be expected. Studies like this are not unheard of, but they are rare. Instead, the medical field has developed risk indicators for heart disease, like cholesterol. As a result, doctors can measure a decrease in cholesterol in the shorter term and make credible claims about a decreased risk of heart disease.

It is often said that donors should take a long-term approach to peacebuilding. However, it is not politically feasible for donors to adopt the “act-and-wait-30-years-for-results” approach. Instead, donors should keep the long term in mind but invest in conducting and/or leveraging the type of research that allows them to make credible claims in the shorter term – the same kind of claims that cholesterol allows doctors to make.

The good news is that a strong, evidence-backed consensus is emerging on what the cholesterol, or cholesterols, of peacebuilding might look like. This consensus is crystallized in Goal 16 of the Sustainable Development Goals and the Peacebuilding and Statebuilding Goals. It provides at least the promise of making credible long-term claims – like “our program has increased people’s security and access to justice, therefore, we have decreased the risk of a return to violent conflict” – based on shorter-term monitoring of results.

To realize this promise, donors need to invest in two types of research. Again, the cholesterol analogy is apt. The first type of research would improve our ability to assess and measure interim results, like access to justice. It would improve our ability to credibly make the shorter-term claim – “our program improved access to justice.” The second type of research would improve our understanding of the mechanisms by which improved access to justice reduces the chance of violent conflict. It would enhance our ability to credibly make the longer-term claim – “by increasing access to justice, we have decreased the chance of future violence and improved the prospects for building a more peaceful society.”

The time is ripe for new approaches

In my experience, there is enough frustration about the current state of monitoring and evaluation for peacebuilding that donors and implementers are willing to experiment with new approaches. The Global Learning for Adaptive Management collaboration between the United States Agency for International Development and the British Department for International Development is one current example. As these experiments are launched, donors will need to move beyond the thinking and reflection stage and find concrete things in which to invest. The best place to start? Invest in partnerships, in data collection, and in research.


Andrew Blum

Andrew Blum, PhD, is the Executive Director of the Joan B. Kroc Institute for Peace and Justice at the University of San Diego. He previously served as Vice-President for Planning, Learning, and Evaluation and as a Senior Program Officer for Grantmaking at the United States Institute of Peace in Washington, DC.