Can we randomize development evaluation?


Jon Lomoy, Director of the OECD Development Co-operation Directorate (DCD-DAC), presents the first of two contributions to the debate on the use of randomised control trials in development economics. Diane Coyle will reply.

In her post Is economics leaving Wonderland?, Diane Coyle justly notes that development economics is ahead of the curve when it comes to “shedding the belief, at least on the part of many economists, that a single conceptual approach will deliver a ‘silver bullet’ solution or method that can be applied everywhere.”

Yet I still think we need to ask whether academic innovations in the multidisciplinary approach to economics, particularly development economics, have had an on-the-ground impact on the practice of development co-operation.

There has been a lot of discussion in academic circles, for example, about the use of randomised control trials to improve the outcome of development co-operation. While I welcome the emphasis this implies on results – and this is why evaluation has such a valuable role to play in development – like many, I wonder about the approach.

Randomised control trials in medicine make sense to me. To find out if a medication works well, you give it to one group of patients and a placebo to another. Then you see if the group taking the medication improves significantly over the group taking the placebo. Simple.
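To make the comparison concrete, here is a minimal sketch in Python of the statistical logic such a trial relies on. Everything in it – the sample size, the assumed treatment effect, the simulated outcomes – is an invented illustration, not data from any real study.

```python
# A minimal sketch of the randomised-trial logic described above.
# All numbers (sample size, effect size) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 500  # hypothetical number of participants

# Random assignment: each participant has a 50/50 chance of
# receiving the medication rather than the placebo.
treated = rng.random(n) < 0.5

# Simulated outcomes: assume the medication improves the outcome
# by 0.3 standard deviations (a purely invented effect size).
outcome = rng.normal(0.0, 1.0, size=n) + 0.3 * treated

# Two-sample t-test: does the medication group improve
# significantly over the placebo group?
t_stat, p_value = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"medication group mean: {outcome[treated].mean():.2f}")
print(f"placebo group mean:    {outcome[~treated].mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because assignment is random, any systematic difference between the groups can, in expectation, be attributed to the medication itself – and it is exactly this property whose transfer to development interventions the rest of this debate questions.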

But can we take it from there to assume, as does the OECD colleague whom Diane Coyle quotes, that in a few years’ time randomised control trials will “turn out to have transformed the field” of development economics? Naturally, we want development that works – we want to see obvious, visible progress in poverty reduction and development. We want to validate the investment of time, money, and mental and physical energy. And where success is elusive, we want to understand quickly why we failed and learn how to do things better.

Yet the narrow, medical approach of randomised trials can, I believe, tell us very little when we need to provide information that is useful to decision makers. When it comes to understanding whether higher-level policies and programmes are working, or whether micro-successes are adding up to real progress, we need a different kind of evaluation. For example, randomised control trials can tell us whether or not an individual project for reintegrating ex-combatants worked well, but will give us little insight into the overall politics and dynamics of peacebuilding – and into how international partners can best help in settings of violent conflict. Will they help donors decide whether they can best support education by channelling money through national budgets or by financing local NGOs? Furthermore, given what we know about the diversity of development contexts, problems like these are multiplied when we try to replicate positive experiences in different countries.

Evaluating development impact is not simple. The simple approach of randomised control trials can help to treat specific symptoms, but may leave larger questions of how to end “the ailment” unanswered.

Dr Coyle does well to point out “the crucial importance of the specific context.” I have seen providers of development assistance struggle to understand how their support makes a difference. How do they use findings from treating mosquito nets with insecticide to lower malaria rates, or from using mobile phones to improve the livelihoods of rural farmers, to generate knowledge that will make development co-operation more effective at the global policy level? It’s not easy – especially when thinking about ‘global public goods’ like the environment, education, gender equality, or fair trade. With so many factors to consider, a host of different evaluation methods is needed to tackle them.

So while I very much endorse Dr Coyle’s hope that development economics has left Wonderland, I also hope that academic theory is heading for the real world – aiming to turn its incredible range of knowledge into practical insights and tools that can be used to treat the ailment rather than just the symptoms.

There is, indeed, no “silver bullet” to cure a global problem like poverty. It is great that we recognise this – as long as it doesn’t discourage us in our quest to end poverty. And it is great that we celebrate leaving Wonderland – as long as we realise that understanding all the pieces of the puzzle, and how they fit together to guide effective action, requires a much broader approach to evaluation.

Useful links

OECD work on evaluation of development programmes

Can we randomize development evaluation? A response to Jon Lomoy


Diane Coyle, Professor of Economics at the University of Manchester and Director of Enlightenment Economics, replies to Jon Lomoy’s article.

Jon Lomoy highlights a real risk in the hunt in development economics for “what works”, or in other words interventions with an identifiable and measurable impact on development outcomes. Reflecting on his comments, I agree there is a risk that economists carrying out randomised controlled trials, or field experiments, simply transfer to their new techniques their old certainties without due humility about the complexity of real situations.

In fact, the desire to demonstrate ‘impact’, along with the belief that trials or experiments offer the tools to do so, could distort assistance towards simpler interventions where cause and effect can be identified. But there is no way to be sure that these constitute the best use of resources and effort. In a context where a specific ‘impact’ has been identified, RCTs might well offer the best way to choose between means of delivery. However, we must be careful about how we generalise their results. While the conclusion that peer comparisons affect behaviour, whether in microcredit loan repayments in South Africa or electricity consumption in London, might have wide applicability, the incentive effects of a bag of lentils are going to be culture-specific. This is an extreme example, but it illustrates the need for constant sensitivity to context in using any evaluation technique, including randomised control trials.

More fundamentally, though, trials or experiments cannot answer the wider questions about which development outcomes are the most important, nor can they unpick the uncertain and complicated chains of causality and feedback in any real-world setting. Interventions with demonstrable impact might be less important in contributing to social welfare than others whose impact is hard to quantify and isolate. And the techniques themselves offer no insight into ranking priorities.

Above all, economic and social development is not a technocratic issue, but also a question of society, culture and politics. Economists alone cannot address all the problems – I wholeheartedly agree with Mr Lomoy on the need for more interdisciplinary work. Although it is a very welcome step for the discipline of economics to have embraced new empirical techniques, there is an obvious corresponding danger that economists’ tendency to hubris will simply relocate itself and end in the insistence that this is the only way to evaluate development policies.

There is a parallel danger that what should be a political or democratic debate is disguised as a technical one. Politicians like to demand answers to problems and where there is a demand, it will be met with a supply. Sometimes, it is not an answer but a decision – with appropriate accountability – that is required.

Having set out all the reasons for not becoming over-enthusiastic about the increasing use of RCT and field experiment techniques in development economics, I continue to believe they represent a huge step forward, and one that economists working in other areas of the discipline should embrace. It would be encouraging to see policymakers everywhere, not just in lower income countries, embrace the idea of trials or pilots to see “what works”. For all the need for caution, it is better than not knowing what works.

Useful links

OECD work on evaluation of development programmes

Haiti earthquake: Independent evaluations needed

Aid arrived, but the capacity to process it was limited

Today’s post is by Hans E. Lundgren and Megan Kennedy-Chouane of the OECD Development Co-operation Directorate

It has been called one of the worst disasters in human history. The earthquake that struck Haiti on 12 January 2010 caused destruction on an unprecedented scale.

Some 230,000 people lost their lives and 300,000 more were injured. Over 1 million people were left homeless.

In response, the international community mounted a massive humanitarian relief effort. The Red Cross, for instance, deployed the single largest country response in its 148-year history. People around the world gave millions in charitable donations and governments pledged $5.8 billion for relief and recovery.

At the peak of the emergency response, four million people received food aid and 1.7 million people were provided with material for basic shelter or tents. Over time, 158,000 families have been relocated into sturdier transitional shelters. Today, 1.3 million people have access to potable water and one million are using 15,300 newly built latrines. Immunisation against major diseases has been provided to 1.9 million children and hundreds of thousands of children are back in school.

And yet, as the world marks the first anniversary of the earthquake, many of us are disappointed with the overall result. Over 800,000 people are still living in camps and day-to-day conditions are extremely challenging for many Haitians. Journalists and experts in and outside of Haiti have criticised the United Nations, the donor community and NGOs for failing to improve conditions.

We support lively public debate about the effectiveness of development aid generally and the humanitarian response in Haiti specifically. However, while anecdotes and stories are useful for highlighting individual experiences, these discussions should also be informed by credible evidence – evidence that can be provided through independent evaluation. 

Here are just a few of the insights that evaluations of the earthquake have provided so far:

  • Humanitarian coordination: An independent Real Time Evaluation three months after the quake showed evidence of the recurrent problems of weak leadership and limited collaboration among international humanitarian organisations working in Haiti, despite recent progress in improving the efficacy of the humanitarian system.
  • The role of the government: Pre-existing governance weaknesses in Haiti were compounded by the earthquake. International groups did not do enough to consult with local and national institutions and engage them in coordination mechanisms. Long-term development cannot be a donor-led process but must be effectively driven by a legitimate government. When formed, the new government will need to act decisively to approve projects, resolve issues around land ownership and set priorities for reconstruction and job creation. (IASC, 2010 and OXFAM, 2010)
  • A challenging urban setting: Reports from the Humanitarian Practice Network and OXFAM show that delivering water, sanitation and other basic services in a major city presented very different challenges from those arising in rural environments (where humanitarians tend to have more experience). For instance, new solutions had to be found for providing toilet facilities for the hundreds of thousands of people camping amid the rubble or in dense tent cities. Organisations must have the capacity to innovate and work flexibly with local communities to find technical solutions suitable for the physical, social and cultural circumstances of the disaster-affected population.
  • Making the right kinds of donations: The Haiti response operation received tonnes of relief items, but the capacity to process these goods and get them quickly to people in need was limited. This led to high storage costs, waste and the clogging-up of airports and roads. Some items sent were not appropriate, including expired medication that had to be destroyed. (IASC, 2010) Only goods for which there is a clearly expressed demand, and established means for distribution, should be sent. (Read more about how best to help.)

These evaluations can be found on the ALNAP Haiti Learning and Accountability Portal. Another source for independent evaluations of development aid is the Development Evaluation Resource Centre (DEReC), hosted by the OECD DAC Evaluation Network. This is a free online collection of over 1,700 evaluations of humanitarian and development aid programmes, including assessments of past donor efforts in Haiti and reports on other disaster responses.

In the context of broader debates about the adequacy of the Haiti earthquake response, evaluators are providing concrete lessons for the future. Sadly, some of these lessons have been highlighted before (see for example this World Bank Evaluation brief or ALNAP’s earthquake lessons note). We need to focus more on creating incentives to implement lessons, in order to ensure that mistakes are not repeated (again) in future disasters.

Useful links

Read more about Aid and the Haiti Earthquake on the Development Evaluation Resource Centre (DEReC)

Find out how the Haiti Evaluation Task Force is working to encourage credible assessments of the aid response.