Compounding complications: why aid is crap and RCTs won’t save it
Posted: February 23, 2011
Apologies for the lack of posts recently. I’ve been suffering three handicaps:
- I’ve just relocated from Oxford to Nairobi for a bit. While my job hasn’t perhaps changed in a discernible way to anyone outside my organisation, changed it has. There’s a lot of induction going on. There’s a lot of strategizing to do. There’s a lot of catching up going on. There are life-saving workshops to get amongst.
- Sadly, my better half is far away. Fortunately, the ODI world cup just got underway. I’m having the rather rare pleasure of (a) actually having a TV and (b) having some cracking sport upon it, at hours of an evening to watch it. Note for philistines: ODI is one-day international cricket. Did I mention you were philistines?
- The Super Rugby just kicked off. Like point 2 above, but squared. Sweet merciful mother of the supposed saviour in heaven, it is good to watch real rugby instead of that northern hemisphere Second Division (also known as the “Six Nations”).
Anyway, down to business. Let’s talk about why aid is often a bit shit. And by shit, I mean: why doesn’t the aid that we do, even if provided on a sound base of evidence for what works, then have a real impact?
To answer that question I’m going to shamelessly nick a great insight from a post by Ben Goldacre, a doctor, science journalist, and pungent scourge of homoeopaths and other feeble-minded quacks. You should buy and read his book. Goldacre points to an article in the British Medical Journal that breaks down why the practice of evidence-based medicine can often fail to deliver real results to patients.
There’s enough similarity between that process and ours (providing evidence-based aid interventions to people in need) that I’ve transplanted the context of the process analysis to ours. You can quibble over the details, but what it boils down to is that for evidence-based aid to happen, and to have a sniff at making an impact, there’s a whole chain of dependent events that need to occur.
- The people designing an aid programme have to be aware that the evidence relevant to the decision they’re making exists.
- They have to accept it, and not ignore it because of prejudice, experience, marketing priorities, personal or organisational inflexibility…
- The evidence has to really be applicable to the context they’re in.
- Funding for the intervention the evidence suggests has to be available. (“We really like that you’ve referenced some RCTs and taken an evidence-based approach… we definitely support that, but we’re also really looking for something new and innovative….”)
- After all this comes together, the practitioner has to write the proposal and invest in making the case, and god knows how previously clear and logical ideas can get mangled in the mill of a donor’s format. Which you then have to deliver to the letter.
- The people involved – partners, communities, beneficiaries, local governments — might not agree. People have all kinds of reasons – reasonable and otherwise – for declining to support your evidence-based intervention on the terms that the evidence indicates.
- Lastly, even if they agree to your face, the participants and communities might not adhere. They might take that microfinance loan for income generation and use it to pay for their mother’s funeral, their son’s school fees, their cousin’s wedding…
There are lots of things in the chain that have to happen right in turn, and each has a probability of going wrong. Goldacre estimates that even in the much more highly controlled situation of providing a medical intervention to one patient:
If we assume, fairly generously, that you’ll be 80% successful at each step in this chain – which really is pretty generous – then with 7 steps, you’ll only manage to follow the evidence in practice 21% of the time (0.8 × 0.8 × 0.8 × 0.8 × 0.8 × 0.8 × 0.8 = 0.21). In some ways this is a gimmick, but I quite like it, because it shows how even marginal underperformance in each domain is combinatorially explosive, overall, and this means you have to be better at everything.
Emphasis added. So in the aid context, look at all those things I listed above. Tell me with a straight face that we get them all right 80% of the time. Because even if we are getting them all right about 80% of the time, then barely 21% of the work that people are putting in to designing and delivering aid programmes is actually turning out right. Is actually in with a chance of delivering a real impact for the people and communities we serve. Is delivering in practice in accordance with the evidence. And that even assumes you have some serious evidence behind what you’re doing in the first place.
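Goldacre’s arithmetic is easy to check for yourself. A quick sketch – the 80% per-step figure is his illustrative assumption, not a measured number, and the steps are assumed to be independent:

```python
# Probability that every step in a dependent chain succeeds,
# assuming each step independently succeeds at the same rate.
def chain_success(per_step_rate: float, steps: int) -> float:
    return per_step_rate ** steps

# Goldacre's seven-step chain at 80% per step:
print(f"{chain_success(0.8, 7):.2f}")  # 0.21 -- roughly one in five
```

The independence assumption flatters us, if anything: in practice a fumbled early step (say, misreading the evidence) probably makes the later steps more likely to fail too, not less.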
Good intentions are not enough? Hell. RCTs are not enough.
Now J gets this. His latest post, which frankly scoops this one that I’ve been meaning to get around to writing for weeks while either strategizing away or watching the cricket, essentially says that we’re distracted, and all the frippery and complexity just lowers the chances of getting it done right. J says aid is, at its heart, about just 3 steps in a chain:
- Ask people what they need –> Listen to the answer.
- Understand the issue or the problem –> Use basic logic to come up with the most reasonable response.
- If the need is X –> Provide X (not Y).
Applying the compounding chances, if we still only got each of those 80% right, we’d still be doing good aid 0.8 × 0.8 × 0.8 = 0.512, or 51% of the time. Before you think that sounds terrible, it’s having an impact more than twice as frequently as the 21% above!
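One more way to see how lopsided this is – a rough back-of-envelope extension of the same arithmetic, still assuming independent steps with a uniform success rate: for the seven-step chain to match the 51% that three steps achieve at 80% each, every one of the seven steps would have to run at about 91%. Near-flawless, seven times in a row.

```python
# How good must each of 7 steps be to match a 3-step chain at 80% per step?
# Solve x**7 == 0.8**3, i.e. x = 0.512**(1/7).
required = 0.512 ** (1 / 7)
print(f"{required:.2f}")  # 0.91 -- every step near-flawless
```

Which is J’s point in numbers: you can either get superhumanly good at seven things, or you can do three things well.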
The same rule of simplicity applies to all the coca-cola kowtowing going on at a certain recent conference in New York – where success is defined in pretty simple terms:
- Make people want more coca-cola
- Make lots of coca-cola
- Sell it to them cost-effectively
Fewer steps, much simpler goals… everything else aside, a much greater probability of success!
Now, all these numbers are purely indicative. And you could argue about the varying level of process analysis in the above. Some of the steps in the chain vary wildly in the rate at which we do well at them. But the fundamental insight stands: the more complicated we make the system of aid, no matter how good the evidence, the more likely we are to get it wrong. The more steps we add to the chain, the more likely it is that the aid we try to do will be rubbish.
What’s the answer?
If we want aid to be better, if we want to have a better chance to deliver for the people we serve (oh – and our donors too), RCTs and an evidence base aren’t nearly enough to save us. The shortest way to improve performance is to take some complexity out – cut down the length of the chain. For example, maybe you can justify a whole extra stuff-and-nonsense process of building in a GIK supply chain on its own terms. But if there’s any risk of time wasting or failure in that step, it lowers your whole likelihood of success.
And with everything else that remains, we have to get much, much better at the other steps – many of which come under the umbrella of project execution. The science and art of that often all-too-overlooked step called implementation. Not just strategy, not just grandiose groupthink, not just context analysis. Doing the right thing, and doing it right.
And on that note, I’m back into the cricket.