Realist evaluation asks ‘how and why do interventions work or not work, for whom, and in what circumstances?’ It holds promise as an approach that can help evaluate complex programmes and provide nuanced insights to guide decisions about rolling out, scaling up, or trying out interventions elsewhere.
This CDI Practice Paper, by Melanie Punton, Isabel Vogel, Jennifer Leavy, Charles Michaelis and Edward Boydell, presents lessons from four large, multi-country realist evaluations of complex interventions conducted by Itad since 2013. It argues that realist evaluation can add value by enhancing the clarity, depth, and portability of findings; by helping evaluators deal with context and complexity in pragmatic ways; and by providing useful tools and lenses through which implementers can critically appraise their programmes and generate learning. However, novice realist evaluators face a number of potential pitfalls, especially in large-scale evaluations. This paper shares lessons on how Itad has navigated these challenges, which may be helpful to others working in similar contexts in international development and beyond.