
Questions and Answers

Answers to some common questions that ODA Reform is asked.

  • What do you want to achieve through this campaign?
    What we want is a robust statistical methodology that produces ODA statistics reflecting true donor effort (i.e. the budgetary cost to the donor). History has shown that this will not happen as long as the rules are set by a group of donor country representatives who lack the political independence the role requires, with no voice for recipient countries. Only a new governance system run by professionally independent statisticians, with full representation of developing countries, can restore ODA’s credibility. In the first instance, OECD Member Governments should agree to an independent statistical review of the ODA rules and commit to reviewing the governance of ODA to bring it into line with the OECD’s own Council Recommendation on Good Statistical Practice.
  • Who do you blame for this?
    This is a structural issue rather than the fault of any individuals. As the OECD knows very well, behaviour is affected by incentives, and the members of the exclusive “donor club” of the DAC have individual and collective incentives to exaggerate their own generosity simply for reasons of international prestige. When you then combine these general incentives with the specific pressures on development agencies to hit aid targets and on finance ministries to rein in real spending, it's no surprise that we now see multiple rules that over-count ODA. There is no way that professional statisticians protected from political pressures, or even a group politically balanced between donors and recipients, would have produced the biased and increasingly absurd ODA rules that the DAC has been concocting. On the other hand, it is difficult not to assign some responsibility to the OECD’s leadership and Secretariat for not reining in the corrupt excesses of one of its policy committees. For years now, the OECD has failed to apply its own, well-codified statistical principles and has allowed the DAC to launder its dodgy statistics by trading on the organisation's reputation as a source of reliable and objective data. Such weak leadership in the organisation carries significant reputational risk that goes beyond ODA figures and the DAC.
  • By how much are the ODA statistics exaggerated?
    The magnitude of the overcounting varies widely among donors. The precise amount depends on how far a donor has exploited the opportunities the DAC’s rule changes have opened up to score excessive ODA on loans, equity investments and “private sector instruments”, as opposed to giving grants, which can only score ODA at their real value. Donors’ own costs of funds for making non-grant transactions also vary. But there is no doubt that some donors are scoring ODA way above their real level of aid effort. The French Government has admitted that it has been scoring more than €5 of ODA for each €1 it has been spending on its loans programme. Other estimates of the over-counting on loans have been even higher (700% to 1,000%). While several areas of ODA overcounting are now blatantly obvious, any precise estimate of the total exaggeration also requires judgment calls about what you are prepared to treat as the baseline of “genuine ODA”. Would you exclude only the clearly bogus changes the DAC has wrought in recent years: the over-generous scoring of loans, the double-counting of loan risk, the scoring of non-concessional “private sector instruments”, the capping of negative reporting of equity sale proceeds, and so on? Or would you go back further and exclude other questionable items, such as in-donor refugee costs? Lastly, one has to consider the effects the rule changes themselves may have had on the structure of aid programmes. The DAC has repeatedly justified its changes by referring to their incentive effects. In reality, they have created multiple incentives for donors to use financial instruments that cost them little or nothing. To the extent that donors have responded to these incentives, their aid programmes are no longer what they would have been if the rules had not changed. So evaluating the exaggeration is not just a matter of comparing the figures produced by the new and old measurement methods; it is also a question of working out how much false ODA has been created by steering donors towards activities that generate it. This also reminds us that, however many tens of billions are now being over-counted as ODA, the figure will only increase in future years as more and more donors reshape their aid programmes in response to the perverse incentives the DAC’s rule changes have created.
  • You are highly critical of the grant equivalent system for calculating the ODA in loans, yet surely it is an improvement over the previous cash-flow model?
    The move to a system where only the grant element of a loan counts as ODA was absolutely the right thing to do. Both lowering and differentiating the discount rate, compared with the previous flat 10%, was also a move in the right direction. Paradoxically, however, the DAC has created a situation where “two rights make a wrong”. Because the grant equivalent is counted directly as ODA under the new methodology, it is essential that the discount rate be close to the lender’s cost of providing the loan. Under the old system, the 10% discount rate was only relevant for clearing the hurdle of the minimum 25% grant element; it was not used to calculate the ODA being given, since the full face value of the loan was counted and repayments of principal were then counted as negative ODA over time. So it mattered much less that the discount rate was set too high. In the new system, the discount rates matter far more, because they are “baked into” the calculation of ODA itself. If they are too high (which they clearly are), this has a massive impact on the ODA that is counted, allowing donors to claim they are providing huge amounts of aid when the reality is that the donor effort is small, non-existent or even negative. And loans, which formerly counted as zero ODA once they were repaid (regardless of the softness or generosity of their terms), are now incentivised over grants for donors whose cost of funds is below the discount rates being used by the DAC, as the illustrative calculation below these questions shows.
  • Why does the OECD use different discount rates for calculating the concessionality level in tied aid and the grant equivalent in other aid loans? Aren’t they essentially the same thing?
    Yes: both systems should be measuring donor effort, in terms of the net cost to the donor’s budget. The grant elements used are so different because the methodologies have been designed and agreed by different OECD committees, drawn from different government ministries with different objectives[1]. The aim of the OECD Governments who make up the Participants to the Export Credit Arrangement, in establishing the Helsinki Tied Aid Disciplines, was to guard against governments providing small subsidies that give their own exporters a competitive advantage. In this area, accordingly, the OECD is incentivised to ensure that the discount rates used accurately reflect the real costs of borrowing for first-class borrowers. If the discount rate were set significantly above these real cost levels (say at 5%), governments could offer significant “sweeteners” to win contracts at little or no cost to the taxpayer. The DAC, on the other hand, comprising the donor community, is incentivised to make its members’ ODA contributions look as generous as possible. Very high discount rates do just that, by massively inflating the so-called “grant equivalent” far above the cost of providing the loan.
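
To make the discount-rate effect concrete, the sketch below shows how the grant equivalent of the same loan changes with the rate used to discount its repayments. This is an illustration, not the DAC’s official calculation: the loan terms, the 9% figure and the assumed 1% donor cost of funds are hypothetical numbers chosen only to show the mechanism described above.

```python
# Illustrative sketch (not the DAC's implementation): how the grant
# equivalent of a loan depends on the discount rate chosen.
# All loan terms and rates below are hypothetical.

def grant_equivalent(face_value, interest_rate, maturity_years, discount_rate):
    """Face value minus the present value of repayments, for a loan repaid
    in equal annual principal instalments with interest on the balance."""
    principal = face_value / maturity_years
    outstanding = face_value
    pv_repayments = 0.0
    for year in range(1, maturity_years + 1):
        cash_flow = principal + outstanding * interest_rate
        pv_repayments += cash_flow / (1 + discount_rate) ** year
        outstanding -= principal
    return face_value - pv_repayments

# A hypothetical loan of 100 at 2% interest, repaid over 20 years.
loan = dict(face_value=100.0, interest_rate=0.02, maturity_years=20)

# Discounted at a high, DAC-style 9% rate, the loan scores roughly 42 as ODA...
print(round(grant_equivalent(**loan, discount_rate=0.09), 1))

# ...but discounted at an assumed 1% donor cost of funds, the "grant" is
# negative: the repayments are worth more to the donor than the loan cost.
print(round(grant_equivalent(**loan, discount_rate=0.01), 1))
```

Under these assumptions, the same loan scores roughly 42% of its face value as ODA, yet discounted at the donor’s own cost of funds it is actually profitable to make. That is the small, non-existent or even negative donor effort described above.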