Opinion

Evaluators in Africa (unknowingly) rise to the challenge of complexity

Published on 11 April 2017

Marina Apgar

Research Fellow

At the opening plenary session of the 8th conference of the African Evaluation Association, the many highly distinguished panel members reminded us of the grand evaluation opportunity that brought us together under the conference theme – Evaluation of the Sustainable Development Goals (SDGs): Opportunities and Challenges for Africa. Panel members encouraged the 500+ (my estimate) members of the global evaluation community in the audience to embrace the exciting evaluation challenge of the progressive global human rights agenda.

Evaluation at heart of debate about ‘truth’ and ‘evidence’

As one representative of the American Evaluation Association suggested, the current political situation in the USA is a stark reminder of the role that evaluators who care about (real) development impact can and should play in upholding social justice all over the world – after all, evaluation is at the heart of the debate about ‘truth’ and ‘evidence’.

Calls were made for new types of evaluation evidence: moving beyond measuring linear, pre-defined change and intervention-effects alone, and using mixed methods to help us understand emergent, complex social change, given the interconnected nature of the SDGs and their imperative to ‘leave no one behind’.

As I was due to be on a panel on methodological innovations for building more complexity-aware evaluation, I was encouraged that ideas of ‘complexity’ and ‘universalism’ were, at least implicitly, framing this conversation.

Learning to adapt

As the programme unfolded, and at times unravelled completely, it became apparent that the ambition of managing 18 parallel sessions with 500+ participants, while donors used their obvious and accepted power to claim the best spots for side meetings, was quickly tipping the system over the edge and into chaos.

On the opening day, the dedicated staff of the secretariat were dealing with a constant flow of session leaders asking for small miracles to turn their plans into reality. They were both accommodating, which likely contributed to the chaos, and responsive, which, as it turns out, is an important skill for navigating such complex dynamics.

As we looked at the original programme that no doubt had taken huge amounts of effort to produce, we realised that it was no longer useful. A bit like when a programme implementation team looks at the promise made to a donor in a log-frame and realises that reality doesn’t fit easily into neatly organised boxes.

We were going to have to learn to adapt, be agile and improvise. The organisers had developed an online app to help us all manage, in real time, an evolving programme – a metaphor for the explosion of digital technology now proposed as the solution for real-time data to fuel adaptive development.

Yet one of the necessary conditions for this innovative methodology to work – internet access for the 1,000-plus devices of all those attending the event – was sadly not met. We were going to have to learn and adapt in analogue fashion.

My fellow panel members rose to the adaptation challenge, and with a more suitable room agreed for the following morning, thanks to the agile organising team, we were able to put a contextualised intervention strategy in place. We made flyers advertising our rescheduled session and delivered direct sales pitches to fellow participants, claiming it was the best show in town. The result was success – we had more people than chairs. I am sure that if things had run to schedule we would not have ended up with so many in our conversation; this emergent reality was looking promising for us!

We shared four cases:

  • Adinda Van Hemelrijck (independent consultant): PIALA – a participatory mixed-methods evaluation approach developed with IFAD for assessing systemic impact at a medium to large scale in contexts where classic counterfactuals or controls don’t work (well). See PIALA publications on IFAD’s and Oxfam’s websites.
  • Steff Deprez (independent consultant): SenseMaker – a methodology for collecting and analysing large amounts of self-signified stories (or micro-narratives) for monitoring, learning and decision-making to guide interventions in complex contexts. See this case study and this framework using SenseMaker for assessing the inclusion of smallholders in value chains.
  • Marina Apgar (IDS): Reflexive use of Theory of Change through Participatory Action Research – a complexity-aware PM&E approach focused on systemic learning. See a recent paper in Journal of Action Research on the approach.
  • Stefano D’Errico (IIED): Combined use of process tracing and Bayesian updating – an approach for reconstructing systemic impact pathways and assessing confidence in them in light of new evidence where classic counterfactuals or controls do not exist (a minimal numerical sketch of the updating step follows this list). See this recent publication on the approach and a paper on how it generates better evidence for sustainable development.
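
To make the last of these concrete: Bayesian updating revises confidence in a causal hypothesis as each new piece of evidence is weighed. The sketch below, in Python, is a hypothetical illustration of that arithmetic only; the probabilities and the evaluation scenario are invented and are not drawn from the IIED publications cited above.

```python
# A minimal sketch of Bayesian updating of confidence in a causal claim.
# All probabilities are invented for illustration.

def update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """Bayes' rule: posterior probability of the hypothesis after
    observing one piece of evidence."""
    numerator = p_e_if_true * prior
    return numerator / (numerator + p_e_if_false * (1 - prior))

# Start agnostic: 50:50 that the intervention contributed to the impact.
confidence = 0.5

# Each pair: (P(evidence | hypothesis true), P(evidence | hypothesis false)).
# Evidence that is much likelier under the hypothesis raises confidence;
# evidence that is nearly as likely either way barely moves it.
evidence = [(0.8, 0.3), (0.6, 0.5), (0.9, 0.2)]

for p_true, p_false in evidence:
    confidence = update(confidence, p_true, p_false)
    print(f"updated confidence: {confidence:.2f}")  # 0.73, 0.76, 0.94
```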

The discussion focused on core issues: differing views of validity and bias across the approaches, how well the tools and methods work in different contexts, design choices, and ongoing challenges with applying co-design principles and with institutional barriers to learning. Our panel leader, Adinda, summarised the principles that emerged across our cases for evaluation in complex settings:

  1. Think and evaluate systemically: assessing the impact of multiple interactions (instead of intervention-effect alone)
  2. Zoom in and out: inquiring into the embedded systems that influence impact trajectories
  3. Spot the gorillas in the room: sensing unintended changes and dynamics (instead of only looking at predefined outcomes)
  4. Be accountable to learn: embracing unpredictability and dealing with emergence to learn the path to impact
  5. Include all perspectives: understanding complex systemic change from different vantage points
  6. Enable meaningful engagement: moving beyond data extraction so that stakeholders can engage meaningfully

From sampling to principles-focused evaluation

Alongside my own session, I participated in discussions that zoomed in on detailed methods but ended up as very long sampling debates, leaving me with a splitting headache. I was also involved in discussions shaped by grand measurement rhetoric that completely ignored the elephants in the room – such as the obvious inability of thousands of quantitative indicators to tell us anything meaningful about the social justice goals we aspire to – and these left me frustrated.

There were also some not entirely novel but definitely inspiring sessions: Michael Quinn Patton presenting his new book on principles-focused evaluation, and young emerging African evaluators asking brave questions of the big (usually northern) gurus in the room while government officials kept pushing us all on the usability of evaluation findings.

Meaningful interactions between conference participants

But perhaps the richest moments came as a consequence of being thrown together in a complex system.

A woman shared her own process of necessary improvisation when her poster suddenly disappeared in the middle of the rescheduled poster session, just as the last bit of daylight left the room, leaving her standing in the dark beside an empty board with five people eager to hear her explain her experience of evaluation in the slums of Nairobi.

A conference like this taking place in London would run like a well-oiled machine, yet participants would probably not interact with even half the people I had the pleasure of bumping into at AfrEA; and as we all muddled through these random interactions together, it seemed, somehow, more meaningful.

So, even though much of the substance of the sessions, in my very biased opinion, failed to come close to the ambitious call for new, fresh conversations on evidence and learning to help us navigate towards the SDGs, I left AfrEA feeling encouraged: the evaluation community can rise to the occasion and navigate complexity with incredible finesse, establishing order out of chaos and allowing meaningful collective spaces for sharing and debate (our desired impact) to emerge.

If only we all believed in experiential learning, we might have nudged ourselves a tiny bit closer to profoundly shifting our way of thinking about and valuing development impact!

Disclaimer
The views expressed in this opinion piece are those of the author/s and do not necessarily reflect the views or policies of IDS.
