The latest IDS Bulletin ‘The Millennium Villages: Lessons on Evaluating Integrated Rural Development’ brings together a series of reflections on integrated development and how best to know whether it works and why. It certainly doesn’t provide all the answers; rather it’s meant to stimulate debate. Some things you might agree with, others you might not. Hopefully, it is at least thought-provoking.
International development is full of jargon and enthusiasm for the next big thing. Indeed, there are a lot of buzzwords around development. Concepts like integrated development, synergy, interconnectedness and multi-sectoral projects hold much appeal in a complex world. They’re intuitive concepts for many of us, as rarely do we view our own lives as a series of unconnected, sector-based silos. Why would those living in low-income countries be any different?
But do we seriously test these concepts? Do we subject them to critical rigour?
Exploring integrated development, past and present
In the first article, Masset looks back at 40+ years of integrated rural development, from the much-debunked projects of the 1970s and 1980s to the newer forms underway in recent times. It’s a story of improving evaluation, yet without any robust testing of whether integration actually produces the synergistic effects it claims.
Assessing value for money
In the second and third articles, the authors look at whether we can assess such synergy from a cost-effectiveness point of view. The logic goes that this is a good test of synergy: if these effects are real and significant (i.e., ‘achieving more together than doing the same things separately’), then we should expect the benefits to far outweigh the costs of integration. Masset et al. undertake a systematic review and find a lack of such examples in the literature – admittedly, in part, because this is an unorthodox use of cost-effectiveness analysis. The authors offer four possible methodologies that provide at least a partial solution: cost-consequence analysis; cost-apportionment; cost-utility analysis; and cost-benefit analysis. Acharya and Hilton apply cost-consequence analysis to the Millennium Villages Project, and show that because such methodologies are only a partial fit, there is a greater burden than normal on interpreting the results; it is not enough simply to take the calculations at face value. Rather, it is argued, there needs to be greater consideration of the specific project context, the use of comparisons with standalone interventions, and the time horizons of the impact evaluation.
The fourth article then goes back to pick up another theme from Masset’s first paper: to be able to robustly test integrated approaches, there needs to be a mid-range theory of synergy. A mid-range theory is one that lies between all-explaining grand theory (such as poverty trap theory) and the detailed theories of specific interventions – something that is not so context-specific, yet allows us to generalise and learn lessons. While individual projects have theories of change, there is often a missing level of abstraction with integrated projects; something that is needed to explain how synergy works, and under what circumstances. The article by Jupp and Barnett explores this challenge with the example of the Millennium Villages Project. The paper argues for an abductive reasoning approach to theories of change – one that is best able to draw from local realities alongside evaluators’ and project staff’s own theories, and thus provide new insights about the lived reality of complex, multiple interconnections.
Value of immersion
This theme is further expanded in the fifth article, where Jupp et al. explore how different realities are important to understanding integrated projects, and the role that immersion can play. The authors consider the benefits of going beyond ‘invited spaces’ and entering into the lived reality of those who are meant to benefit. This is something that is so often undervalued in development evaluation. Even with the best intentions, ‘participation’ in reality is often still on our (the evaluator’s) terms, in spaces we have legitimised, and constrained by our agendas and timeframes. Rarely do evaluations work in pre-existing local spaces and at the pace of people’s ordinary lives.
In the final two articles, the Bulletin begins to take this to its logical conclusion: if you try to do ‘everything together at once’ then it becomes virtually impossible to untangle and test robustly what has worked and how. The sheer complexity of the interconnections, the lack of knowledge on how synergy actually works and the unknown realities of how people respond and interact, all interplay to create a mighty evaluation challenge. Indeed, a recent systematic review of integrated development by Ahner-McHaffie et al. shows that while integrated projects can work and even have long-lasting effects on poverty reduction, there is little evidence to prove that this is due to integration itself. The jury’s still out.
So, these last two articles draw on a couple of examples from FHI360. Burke et al. show a neat way to test integration: first, integrate just two interventions (rather than many), and then use different treatment arms to test with both, with one (of each type), and without any. This is a more sophisticated take on the ‘with’ and ‘without’ approach. It all seems relatively simple (even if the practice is of course more complicated and costlier). Namey et al. discuss this further and go on to provide a contrasting tale of a similar approach to integrating just two interventions. This time it is a more sobering story. Reality is indeed messy, and integration itself brings new challenges, including the two interventions operating in completely different ways along different timelines. Even the practicalities of when to measure differ: family (social) strengthening and household (economic) strengthening operate along different time horizons.
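To make the logic of that four-arm design concrete, here is a minimal sketch in Python. It is purely illustrative – the arm names, outcome figures and function names are hypothetical, not taken from the Burke et al. study – but it shows why the four arms matter: comparing ‘both’ against the sum of the single-intervention gains gives a direct estimate of the synergy (interaction) effect.

```python
import random

# Hypothetical sketch of a 2x2 factorial evaluation design:
# four arms (neither intervention, A only, B only, both) allow us
# to separate each intervention's own effect from any synergy.

ARMS = ["control", "intervention_A", "intervention_B", "both"]


def assign_arms(unit_ids, seed=42):
    """Randomly assign each unit (e.g. a village) to one of the four arms."""
    rng = random.Random(seed)
    return {unit: rng.choice(ARMS) for unit in unit_ids}


def synergy_estimate(mean_outcomes):
    """Interaction effect: does 'both' exceed the sum of the single-arm gains?

    mean_outcomes maps each arm name to its average outcome.
    A positive value is (tentative) evidence of synergy; zero suggests
    the interventions simply add up, with no extra benefit from integration.
    """
    control = mean_outcomes["control"]
    gain_a = mean_outcomes["intervention_A"] - control
    gain_b = mean_outcomes["intervention_B"] - control
    gain_both = mean_outcomes["both"] - control
    return gain_both - (gain_a + gain_b)
```

For example, with (invented) mean outcomes of 10 in the control arm, 12 with A alone, 13 with B alone and 18 with both, the single interventions account for gains of 2 and 3, so the estimated synergy is 8 − (2 + 3) = 3. In practice, of course, such an estimate would also need standard errors and a properly powered sample, which is exactly why the authors note the approach is costlier than it first appears.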
Lessons for the future
So, what do we conclude? Integrated approaches to development are often untidy and complex, but then the real world is messy and complex. Alongside interconnected, multi-sectoral and complexity-aware approaches being advocated to help reach the Sustainable Development Goals (SDGs), we shouldn’t shy away from the need to critically and robustly test what we do. And, while for some integrated interventions the only option might be to gather rapid insights and feed them back to learn and adapt, there are many instances where we should evaluate to prove what works and why.
We offer four lessons on how to do this better:
- Develop more specific (i.e. empirically testable) mid-range theories about how different activities and interventions are expected to synergistically interact.
- Focus on narrower combinations of (say) just two interventions that are being integrated, with the aim of producing evidence that has more portable lessons for similar approaches to integration.
- Where possible, robustly test these different combinations (i.e., with integration, with single interventions only, without). Or, where this is not feasible, view evidence generation as a longer-term endeavour over decades, sequencing a range of observational (exploratory) research studies until narrower combinations of two interventions become testable with more robust evaluation designs.
- Apply suitable designs to assess cost-effectiveness, drawing on cost-consequence analysis, cost-apportionment, cost-utility analysis or cost-benefit analysis – and collect the necessary data from the outset of the initiative. But, because of the challenge of applying such techniques for purposes beyond their original intention, pay greater attention to guiding decision-makers through the interpretation of such findings.
To conclude, to dismiss integrated forms of development as simply the resurrection of a past (and now irrelevant) fad is to ignore the knowledge that we have gained since. Whilst there seems to be no place for the massive Integrated Rural Development projects of times gone by, there is emerging evidence around important combinations: for example, combining family strengthening and the household economy (Namey et al.); HIV prevention education and economic strengthening amongst youth (Burke et al.); agriculture with nutrition; governance and food security; water with education interventions; and so on.
More than the Millennium Development Goals, the SDGs imply a need to seriously and critically consider interconnections and the potential for integrating approaches to development. While complexity and interconnectedness seem to be the current vogue, we hope this Bulletin offers some pause for thought: can we better prove, and improve on, what we already know about integrated development, rather than forget the lessons of the past?
Chris Barnett is Technical Director at Itad and editor of the IDS Bulletin ‘The Millennium Villages: Lessons on Evaluating Integrated Rural Development’.