Compounding complications: why aid is crap and RCTs won’t save it

Apologies for the lack of posts recently. I’ve been suffering three handicaps:

  1. I’ve just relocated from Oxford to Nairobi for a bit. While my job perhaps hasn’t changed in a way discernible to anyone outside my organisation, changed it has. There’s a lot of induction going on. There’s a lot of strategizing to do. There’s a lot of catching up to be done. There are life-saving workshops to get amongst.
  2. Sadly, my better half is far away. Fortunately, the ODI world cup just got underway. I’m having the rather rare pleasure of (a) actually having a TV and (b) having some cracking sport upon it, at hours of an evening to watch it. Note for philistines: ODI is one-day international cricket. Did I mention you were philistines?
  3. The Super Rugby just kicked off. Like point 2 above, but squared. Sweet merciful mother of the supposed saviour in heaven, it is good to watch real rugby instead of that northern hemisphere Second Division (also known as the “Six Nations”).

Anyway, down to business. Let’s talk about why aid is often a bit shit. And by shit, I mean: why does the aid we do, even when provided on a sound base of evidence for what works, so often fail to have a real impact?

To answer that question I’m going to shamelessly nick a great insight from a post by Ben Goldacre, a doctor, science journalist, and pungent scourge of homoeopaths and other feeble-minded quacks. You should buy and read his book. Goldacre points to an article in the British Medical Journal that breaks down why the practice of evidence-based medicine can often fail to deliver real results to patients.

There’s enough similarity between that process and ours (providing evidence-based aid interventions to people in need) that I’ve transplanted the context of the process analysis to ours. You can quibble over the details, but what it boils down to is that for evidence-based aid to happen, and to have a sniff at making an impact, there’s a whole chain of dependent events that need to occur.

  1. The people designing an aid programme have to be aware that the evidence relevant to the decision they’re making exists.
  2. They have to accept it, and not ignore it because of prejudice, experience, marketing priorities, personal or organisational inflexibility…
  3. The evidence has to really be applicable to the context they’re in.
  4. Funding for the intervention the evidence suggests has to be available. (“We really like that you’ve referenced some RCTs and taken an evidence-based approach… we definitely support that, but we’re also really looking for something new and innovative….”)
  5. After all this comes together, the practitioner has to write the proposal and invest in making the case, and god knows how previously clear and logical ideas can get mangled in the mill of a donor’s format. Which you then have to deliver to the letter.
  6. The people involved – partners, communities, beneficiaries, local governments – might not agree. People have all kinds of reasons – reasonable and otherwise – for declining to support your evidence-based intervention on the terms that the evidence indicates.
  7. Lastly, even if they agree to your face, the participants and communities might not adhere. They might take that microfinance loan for income generation and use it to pay for their mother’s funeral, their son’s school fees, their cousin’s wedding…

There are lots of things in the chain that have to happen right in turn, and each has a probability of going wrong. Goldacre estimates that even in the much more highly controlled situation of providing a medical intervention to one patient:

If we assume, fairly generously, that you’ll be 80% successful at each step in this chain – which really is pretty generous – then with 7 steps, you’ll only manage to follow the evidence in practice 21% of the time (0.8 × 0.8 × 0.8 × 0.8 × 0.8 × 0.8 × 0.8 = 0.21). In some ways this is a gimmick, but I quite like it, because it shows how even marginal underperformance in each domain is combinatorially explosive, overall, and this means *you have to be better at everything*.

Emphasis added. So in the aid context, look at all those things I listed above. Tell me with a straight face that we get them all right 80% of the time. Because even if we are getting them all right about 80% of the time, then barely 21% of the work that people are putting into designing and delivering aid programmes is actually turning out right. Is actually in with a chance of delivering a real impact for the people and communities we serve. Is delivering in practice in accordance with the evidence. And that even assumes you have some serious evidence behind what you’re doing in the first place.
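
If you want to check the gimmick yourself, here’s a minimal back-of-the-envelope sketch (Python, purely illustrative; it assumes each step is independent and goes right with the same probability):

```python
# Minimal sketch of compounding success along a chain of dependent steps.
# Assumption: each step is independent and goes right with probability p.

def chain_success(p: float, steps: int) -> float:
    """Probability that every single step in the chain goes right."""
    return p ** steps

# Goldacre's seven-step chain at 80% per step:
print(f"{chain_success(0.8, 7):.0%}")  # -> 21%
```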

Good intentions are not enough? Hell. RCTs are not enough.

Now J gets this. His latest post, which frankly scoops this one that I’ve been meaning to get around to writing for weeks while either strategizing away or watching the cricket, essentially says that we’re distracted, and that all the frippery and complexity just lowers the chances of getting it done right. J says aid is, at its heart, about just three steps in a chain:

  • Ask people what they need → Listen to the answer.
  • Understand the issue or the problem → Use basic logic to come up with the most reasonable response.
  • If the need is X → Provide X (not Y).

Applying the same compounding odds, if we still only got each of those steps 80% right, we’d be doing good aid 0.8 × 0.8 × 0.8 ≈ 0.51, or 51% of the time. Before you think that sounds terrible: it’s having an impact more than twice as often as the 21% above!
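
Here’s the same illustrative sketch run both ways. The sting in the tail: to match J’s three-step 51%, a seven-step chain would need roughly 91% success at every single step (indicative numbers only, not measurements):

```python
# Comparing J's three-step chain to the seven-step one, both at an
# assumed 80% per-step success rate.

def chain_success(p: float, steps: int) -> float:
    return p ** steps

print(f"3 steps: {chain_success(0.8, 3):.0%}")  # -> 51%
print(f"7 steps: {chain_success(0.8, 7):.0%}")  # -> 21%

# Per-step rate a seven-step chain needs just to match the three-step one:
needed = chain_success(0.8, 3) ** (1 / 7)
print(f"needed per step: {needed:.1%}")  # -> 90.9%
```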

The same rule of simplicity applies to all the Coca-Cola kowtowing that went on at a certain recent conference in New York. Coca-Cola is a company for whom success is defined in pretty simple terms:

  • Make people want more Coca-Cola
  • Make lots of Coca-Cola
  • Sell it to them cost-effectively

Fewer steps, much simpler goals… everything else aside, a much greater probability of success!

Now, all these numbers are purely indicative. And you could argue about the varying levels of process analysis in the above; some of the steps in the chain vary wildly in the rate at which we do well at them. But the fundamental insight stands: the more complicated we make the system of aid, no matter how good the evidence, the more likely we are to get it wrong. The more steps in the chain we create, the more likely it is that the aid we try to do will be rubbish.

What’s the answer?

If we want aid to be better, if we want a better chance to deliver for the people we serve (oh – and our donors too), RCTs and an evidence base aren’t nearly enough to save us. The shortest way to improve performance is to take some complexity out – cut down the length of the chain. For example, maybe you can justify a whole extra stuff-and-nonsense process for building in a gifts-in-kind (GIK) supply chain on its own terms. But if there’s any risk of time-wasting or failure in that step, it lowers your whole likelihood of success.
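
To put a rough number on what cutting a step buys you: at the indicative 80% per-step rate, dropping one step from a seven-step chain lifts the overall odds from about 21% to about 26% – a 25% improvement, for doing less:

```python
# Illustrative marginal value of removing one step from the chain,
# assuming each remaining step still succeeds 80% of the time.

def chain_success(p: float, steps: int) -> float:
    return p ** steps

before = chain_success(0.8, 7)  # ~0.21
after = chain_success(0.8, 6)   # ~0.26
print(f"7 steps: {before:.0%}, 6 steps: {after:.0%}")
print(f"relative improvement: {after / before - 1:.0%}")  # -> 25%
```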

And with everything that remains, we have to get much, much better at the other steps – many of which come under the umbrella of project execution: the science and art of that all-too-often-overlooked step called implementation. Not just strategy, not just grandiose groupthink, not just context analysis. Doing the right thing, and doing it right.

And on that note, I’m back into the cricket.

7 Comments on “Compounding complications: why aid is crap and RCTs won’t save it”

  1. Jed Emerson says:

    Generally speaking, good points all… But here is my question: while simply giving folks what they need or request may be good policy/practice for aid relief, does it necessarily follow that it is generally good practice for social policy and services?

    Sometimes what we “know” is not so. I worked once with a youth org that thought they were doing great work till a follow-up analysis showed kids actually did worse following their intervention! Deeper analysis showed why; the program was modified and the outcomes improved – but if we had stuck with what we “knew” it would have continued down the wrong track.

  2. Lee says:

    What happens when “simple logic” is wrong, or can’t discriminate between competing solutions?

    • Cynan says:

      It’s a good question. I don’t mean to say that the shortest/simplest decision path is always the best. Just that there are areas of our work that are necessarily complex, but others where we aren’t ruthless enough about weeding out unnecessary complication. Or we take on a simplifying/streamlining new approach without properly consigning previous approaches to history – thereby just increasing the total overall complexity, and the links in the chain that need to be managed, to make aid happen. CERF and HACT are two areas that spring to mind.

  3. […] Compounding Complications: why aid is crap and RCTs won’t save it – la vidaid loca – How randomized control trials alone don’t help nonprofits make sound program decisions; there are many other factors that can get in the way. […]

  4. Ian Thorpe says:

    Kudos for coming up with the aid equivalent of Drake’s equation (the formula that estimates the likelihood of discovering intelligent extraterrestrial life) and explaining why successful aid might be almost as rare, but with a probability that increases over time as new data come to light.

    That said, if you take your logic to its conclusion, it appears as if the underpants gnomes almost had it right all along: 1. Collect underpants. 2. ??? 3. Profit. Take away step 2 (and possibly even step 1) and then you’re made.

    Seriously though – while starting with the beneficiary in mind and working backwards makes a lot of sense, executing your step 2 is far from obvious and probably disguises almost as many steps (or assumptions) as your RCT-based model.

    • Cynan says:

      Thanks for weighing in, Ian. I quite agree that this does play fairly loose with the level of analysis in the task breakdown.
      The overall thrust of it remains, though, and a shorter (Twitter?) version of this post might have read:

      Aid delivery is a very complex business model.
      And it has highly ambitious goals.
      All else being equal, this is a recipe for a high likelihood of failure.

      • Ian Thorpe says:

        Well put.
        And this is a good argument for J’s approach rather than a highly “scientific” one.
        The other thing this would imply to me is that instead of making fixed long-term plans, we’d be better off experimenting and tinkering a bit more with our programmes as we go along, improving them based on experience, rather than running expensive RCTs or impact evaluations afterwards only to find out that the programme didn’t work – or worse, that we didn’t formulate the research objectives and questions properly beforehand and so have no idea whether it worked or not.

