Australia’s problem with Pacific aid
02:49 am GMT+12, 29/05/2020, Australia

Written by Terence Wood, Sabit Otor, Matthew Dornan
Although one of us (Terence) recently co-authored a report highlighting Australian aid transparency issues, the aid programme does deserve credit for placing some very useful information in the public domain. A good example of this can be seen in Aid Programme Performance Reports, which contain a suite of helpful information, including appraisals of the performance of individual aid projects.
Appraisals are only available for projects over a certain size and only for larger partner countries, but in making this type of data available, Australia joins a select group of donors that allow researchers the chance to study how projects are performing. Two of us (Sabit and Terence) have already conducted fruitful work with World Bank and Asian Development Bank (ADB) project data. So, when the opportunity to use Australian aid project data presented itself, we jumped at the chance to learn more.
We have just published our analysis of Australian aid project performance based on these data in an open-access paper in Asia and the Pacific Policy Studies. In the paper we use the data to analyse which types of aid projects are more likely to work and where. We also compare Australia with other donors.
Project appraisal data aren’t perfect: they’re a product of staff assigning ratings of project performance. (Various aspects of performance are appraised; in our paper we focused solely on effectiveness.) Although there are clear criteria for ratings, and checks built into the system, there is an inevitable degree of subjectivity that goes into appraising aid projects.
Fortunately, if the subjectivity is effectively random – some projects are scored more generously than others but there’s no systematic bias – it isn’t a big issue for the type of large-number analysis we conducted. And although it is possible projects are appraised too generously across the board, this isn’t a problem for our work either. Our analytical leverage comes from comparing differences between appraisal scores. If all scores are inflated equally, we can still learn from differences between different types of projects, or projects in different places.
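The logic here can be sketched with a few lines of Python using entirely hypothetical numbers (these are not the paper's data): if every appraisal score is inflated by the same amount, the difference between group averages is unchanged.

```python
# Illustrative sketch with made-up appraisal scores (1-6 scale),
# not data from the actual study.
pacific = [4.0, 4.0, 5.0, 4.0]
elsewhere = [5.0, 4.0, 5.0, 5.0]

def mean(xs):
    return sum(xs) / len(xs)

raw_gap = mean(elsewhere) - mean(pacific)

# Suppose every appraiser is over-generous by the same half point.
bias = 0.5
inflated_gap = (mean([x + bias for x in elsewhere])
                - mean([x + bias for x in pacific]))

# A uniform inflation cancels out of the comparison:
print(raw_gap == inflated_gap)  # prints True
```

Only a systematic bias that differed between groups (say, Pacific projects scored more harshly than others) would distort this kind of comparison.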
Analysing a dataset of project appraisals came with its own challenges, but it also brought a real strength. Rather than focusing in depth on an individual aid project, or simply drawing on our own intuitions, we were able to zoom out and look for systematic differences in the performance of Australian aid projects.
When we did this, much of what we found was interesting simply because of what we didn’t find. We found no good evidence, for example, that Australia faces clearly different challenges from other donors. We found no good evidence that Australian aid is particularly effective in certain sectors (although humanitarian emergency projects appear more effective than long-term development projects).
However, one clear and important finding did emerge from our analysis. This was that Australian aid projects perform less well in the Pacific. You can see this in the chart below, which plots the average Australian project appraisal score both in the Pacific and elsewhere in the world.
The finding proved to be remarkably robust. The Pacific continued to be less successful even when we controlled for project differences (sectors, project size, etc.).
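The kind of robustness check described above can be sketched as a regression of appraisal scores on a Pacific indicator plus project controls. The snippet below uses simulated data (the variable names, the simulated gap of -0.4, and the project-size control are all illustrative assumptions, not the paper's actual model or estimates).

```python
import numpy as np

# Simulated data: 200 hypothetical projects, scores built with a
# true Pacific gap of -0.4 and a project-size effect, plus noise.
rng = np.random.default_rng(0)
n = 200
pacific = rng.integers(0, 2, n).astype(float)  # 1 = Pacific project
log_size = rng.normal(0.0, 1.0, n)             # stand-in control variable
score = 4.5 - 0.4 * pacific + 0.1 * log_size + rng.normal(0.0, 0.3, n)

# Ordinary least squares: score ~ intercept + pacific + log_size
X = np.column_stack([np.ones(n), pacific, log_size])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# beta[1] is the estimated Pacific gap after controlling for size;
# with this simulated data it recovers a value near -0.4.
print(round(beta[1], 2))
```

If the Pacific coefficient stays negative after adding controls like sector and project size, the gap cannot be explained away by those project characteristics, which is the pattern the authors report.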
It’s true that the magnitude of the difference is not massive: projects are nominally assessed on a one-to-six scale, and the difference in the chart is less than half an increment on that scale. However, aid programme staff are clearly averse to giving projects very high or very low scores. (Almost all projects were scored four or five.) Given the diversity of aid projects in the real world, this clustering is surely an artefact of risk aversion when appraisals are made. The Australian aid programme isn’t unique in this: we found the same clustering in other donors’ data. But a likely consequence is that the difference in project performance between the Pacific and other countries is understated. The difference in reality is probably much greater.
Australia is not unique in suffering worse project performance in the Pacific. Other researchers have found it in ADB data. Two of us (Sabit and Terence) have shown the same gap exists with ADB and World Bank loans.
The issue of the Pacific emerges in other analysis too. The region also under-performs in the Australian aid programme’s assessments of how well it meets its country objectives in recipient countries. The practical experiences of some aid workers point to similar issues.
What does all this mean? We’re working on a new project using a large multi-donor dataset to gather insights on why projects are less effective in the Pacific.
As far as aid practice goes, lower project effectiveness in the Pacific shouldn’t mean less aid is given to the region. The need for aid is high, particularly in smaller countries, and in the poorest parts of Melanesia.
Rather, we think the obvious lesson is that all donors (not just Australia) need to sharpen their focus on giving aid well in the Pacific. More needs to be learnt about context. More emphasis should be placed on gold-standard evaluations: despite lower levels of aid effectiveness, there is a dearth of robust impact evaluations undertaken in the Pacific. Effective aid in the Pacific requires more work. But if we truly want to be a good partner to the region, it’s the least we can do.
The authors gratefully acknowledge the Australian aid programme’s willingness to make its data available, its advice on those data, and the interest it has taken in our research thus far.


Pacific Islands News Association