Recent cross-country panel regression analyses have revived the debate over the effect of development aid on economic growth (World Bank 1998; Burnside and Dollar 2000; Easterly et al. 2004; Roodman 2007).
These arguments have been fuelled by the results-based management of development aid promoted by initiatives such as the Paris Declaration, and naturally lead to calls for more rigorous impact evaluation at the project level. The importance of impact evaluation is frequently stressed by the international aid community (World Bank 2006; Savedoff et al. 2006; Asian Development Bank 2006; Banerjee 2007).
Bilateral development aid institutions, however, have been slow to respond to requests for impact evaluation, despite being in general agreement with the targets of the Millennium Development Goals and despite their stated commitment to pay due attention to aid effectiveness. Why are bilateral development aid institutions slow to accept the importance of impact evaluations? To answer this question, this article examines the experience of the Japan Bank for International Cooperation (JBIC) in conducting rigorous impact evaluation of Japan's ODA projects and draws some lessons relevant to other donor institutions. We show that the particular features of Japan's aid, and the aid environment in general, are impediments to adopting rigorous impact evaluations on a full scale. We also examine what the introduction of rigorous impact evaluation would mean for bilateral aid.
While rigorous impact evaluation is drawing growing attention in the evaluation community, one rarely sees a discussion of how one can learn from past aid programmes in a systematic manner. We point out that rigorous impact evaluation alone cannot draw lessons from past aid programmes, and argue that we must understand the mechanism that produced the measured impact in order to better predict the impacts of future aid. Summative aspects of rigorous impact evaluations have been discussed intensively, while formative aspects have long been neglected. Drawing on the Bayesian statistics/econometrics literature, we discuss one method of inferring such mechanisms from data. We show the need for sharing data and evaluation experiences among evaluators, and the need for the evaluation community to work more closely with the research community.
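The Bayesian idea invoked here can be illustrated with a minimal sketch: a prior belief about a project's impact, formed from past programmes, is combined with a new evaluation estimate to yield an updated (posterior) belief. The conjugate normal-normal model and all numbers below are illustrative assumptions, not taken from the article or from JBIC's evaluations.

```python
def update_normal(prior_mean, prior_var, data_mean, data_var):
    """Combine a prior belief about an aid project's impact with a new
    impact-evaluation estimate (normal-normal conjugate update).
    Returns the posterior mean and variance."""
    # Precisions (inverse variances) add under the normal-normal model.
    precision = 1.0 / prior_var + 1.0 / data_var
    post_var = 1.0 / precision
    # The posterior mean is a precision-weighted average of prior and data.
    post_mean = post_var * (prior_mean / prior_var + data_mean / data_var)
    return post_mean, post_var

# Hypothetical numbers: past programmes suggest an impact of 2.0 (variance 4.0);
# a new rigorous evaluation estimates 5.0 with variance 1.0.
mean, var = update_normal(2.0, 4.0, 5.0, 1.0)
print(mean, var)  # → 4.4 0.8
```

The posterior (4.4) sits between the prior and the new estimate but closer to the more precise one, capturing how accumulated evaluation evidence can systematically revise predictions about future aid impacts.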
Following this introduction, Section 2 examines the characteristics of Japan's ODA and describes how the aid community in Japan has reacted to requests for rigorous impact evaluation. Section 3 introduces the evaluation designs, evaluation results, and other findings of rigorous impact evaluations recently undertaken by JBIC in Bangladesh and Peru. In Section 4, we use the Bayesian framework to show how rigorous impact evaluations can feed back into aid projects, and make the case for closer collaboration between the international aid community and the research community. Section 5 concludes.
This article comes from the IDS Bulletin 39.1 (2008), 'Learning to Evaluate the Impact of Aid'.