Many of the advances in climate science over the last few decades have been manifest in climate models.
Realism and skill have increased over time – though not necessarily proportionally – as more processes have been parameterised. Land surface processes are among the most recent additions, and these lend themselves naturally to the simulation, within the climate model, of derived quantities such as crop yield (Osborne et al. 2007). This is part of a broader extrapolation of climate variables, which need not happen within the climate model, into impacts such as those on agriculture (Challinor et al. 2004) and health (Thompson et al. 2006). Climate impacts research is a growth area, with emerging methodologies and resources (Challinor et al. 2008a).
A particular area of growth in climate science over the last decade is in the use of ensembles, where one or more climate models are used to quantify the inherent uncertainty in climate prediction (Collins and Knight 2007; Lejenäs 2005). This is the result of both ongoing increases in computer power and the realisation that increased realism alone is insufficient to maximise forecast skill. There is the potential to increase the skill of seasonal forecasts and also to produce forecasts seamlessly at a range of timescales (WCRP 2008), but how useful are these developments likely to be in supporting pro-poor adaptation?
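The ensemble idea can be illustrated with a deliberately minimal sketch. The toy model, its sensitivity parameter and the assumed distribution below are all hypothetical, chosen only to show how sampling an uncertain parameter yields a spread of projections rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy "model": the projected response scales with an
# uncertain sensitivity parameter (an illustration, not a real climate model)
def toy_model(sensitivity, forcing=1.0):
    return sensitivity * forcing

# Perturbed-parameter ensemble: sample the uncertain parameter many times
sensitivities = rng.normal(loc=3.0, scale=0.8, size=1000)
projections = toy_model(sensitivities)

# The ensemble spread, not any single member, quantifies the uncertainty
low, mid, high = np.percentile(projections, [5, 50, 95])
print(f"median projection: {mid:.2f}, 5-95% range: [{low:.2f}, {high:.2f}]")
```

Real perturbed-parameter and multi-model ensembles follow the same logic at vastly greater cost: uncertainty in inputs and model formulation is propagated through to a distribution of outcomes.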
Climate impacts research can increase the relevance of climate forecasts. However, as explained in the third section here, care should be taken to strike the right balance between relevance and accuracy. Also, models of climate impacts should be used with appropriate consideration of their limitations. There can be a tendency within parts of the scientific community to view models as the repository of scientific knowledge, when in fact the fundamental source of knowledge is people: the scientists themselves, practitioners and stakeholders. This paradigm, whereby models are trusted principally because of the many independently tested equations they contain, can result in over-confidence in results. When it comes to equations in a numerical model, more is not necessarily better. A model with many equations will have many associated parameters.
It is unlikely that the correct value will be found for every parameter, especially when parameters are not directly measurable and/or represent complex interactive processes with non-reproducible results (both of which are common in biological systems). This difficulty in constraining a large set of parameters with only limited observations increases the risk of getting the right answer (model output equals observations) for the wrong reason (the model has been overly ‘tuned’).
Consider, as an example, a crop model with yield as its principal output. The model may perform well when its many parameters are tuned to observed yields, while performing poorly in the absence of this tuning – exactly the circumstances in which the model is needed and implicitly trusted (since there are no observations of yield). Having fewer parameters can reduce the risk of over-tuning (Cox et al. 2006). This pragmatic approach can result in a number of models that give a good fit to observations, so that there may be more than one acceptable model (Beven 2006). In crop modelling, this approach has the advantage of simulating at a systems level close to crop yield, the variable of interest (Sinclair and Seligman 2000). Whichever approach is taken, model tuning (or calibration, as it is more commonly known) is an important step in producing a viable model. However, as outlined above, and as explored in more detail by Challinor and Wheeler (2008a), the relationship between model complexity and calibration should always be carefully explored.
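The over-tuning risk described above can be demonstrated with a toy statistical example. The synthetic "yield response", the observation noise and the two candidate models below are all hypothetical; the point is only that a many-parameter model can match the calibration data almost perfectly yet fail outside them, while a simpler model generalises better:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Hypothetical underlying "truth": yield responds smoothly to one driver
def true_response(x):
    return 2.0 + 0.5 * x

# A handful of noisy observations: the limited calibration data
x_obs = np.linspace(0, 4, 6)
y_obs = true_response(x_obs) + rng.normal(0, 0.3, size=x_obs.size)

# Two candidate models: few parameters versus many parameters
simple = Polynomial.fit(x_obs, y_obs, deg=1)   # 2 parameters
complex_ = Polynomial.fit(x_obs, y_obs, deg=5) # 6 parameters

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# In-sample: the many-parameter model fits the observations almost exactly
print("in-sample RMSE   simple :", rmse(y_obs, simple(x_obs)))
print("in-sample RMSE   complex:", rmse(y_obs, complex_(x_obs)))

# Out-of-sample, where the model is actually needed and no
# observations exist to expose the over-tuning
x_new = np.linspace(4.5, 6, 10)
y_new = true_response(x_new)
print("out-of-sample RMSE simple :", rmse(y_new, simple(x_new)))
print("out-of-sample RMSE complex:", rmse(y_new, complex_(x_new)))
```

The six-parameter model passes through the six noisy observations, so its in-sample error is near zero; beyond the calibration range its error is far larger than the simple model's – the right answer for the wrong reason.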
This article comes from the IDS Bulletin 39.4 (2008) Towards a Science of Adaptation that Prioritises the Poor