
Non-Performing Performance Indicators

It is shocking that, in quite a few countries which embarked on performance budgeting reforms years ago, performance indicators remain awful. Performance indicators are, after all, a basic building block of any performance budgeting system. Yet OECD surveys – most recently in 2018 – have repeatedly shown that in half of member countries, available performance indicators are largely irrelevant for budgetary decision-making. My experience in many developing countries tells me that the situation there is often worse. What is the problem?

The explanation is not that it can’t be done. There are quite a few countries which have, as part of their performance budgeting systems, developed excellent suites of performance indicators. France, Australia, South Africa and New Zealand are examples.

There is an old joke about a drunken man intently searching for his lost car keys on the footpath under a street lamppost. A police officer comes along and, after helping him search for a while, asks him whether he is sure that that is where he lost his keys. The man replies that, no, he lost them somewhere on the other side of the street, but that he is searching under the lamppost because the light is so much better there.

In quite a few countries, the development of performance indicators has been exactly like that. Ministries chose performance indicators based on the ready availability of data rather than the relevance of the indicators concerned. The result is that budget documents are stuffed with indicators such as the average time taken to fill vacant staff positions, average class sizes, and the number of business licenses issued – because these are things that are typically measured and managed as a routine part of day-to-day administration. These are, in technical language, measures of inputs, activities (i.e. work processes), and output quantity (i.e. volume of services delivered). Virtually absent are the indicators which are crucially important for performance budgeting purposes – namely outcome (effectiveness), quality, and efficiency indicators.

I have seen this mechanism at work in many countries which are in the early days of developing performance budgeting. When undertaking the process of selecting performance indicators for programs, the focus is entirely upon indicators that can be developed easily. Because there is little data on outcomes, quality, and efficiency, few if any of these types of indicators find their way onto paper.

To avoid this, three principles should be applied in the development of indicators for performance budgeting.

The first is that the aim should be to develop, for each program, four types of indicator – effectiveness (outcome), output quantity, quality, and efficiency.*

The second is that important types of indicator should not be omitted simply because no data is available. If, for example, there is no outcome data, the government should nevertheless define the outcome indicators which it needs to develop – be they school student literacy levels, disease incidence rates, or air quality indexes. These indicators should then be included in the list of program indicators in the budget documents with an asterisk indicating that they are under development and will be reported in future.

The third and final principle is that indicators of inputs and activities are, generally speaking, inappropriate for performance budgeting purposes. Knowing how long it takes to fill vacant staff positions is important for internal management purposes, but has no place in a performance budget.

To ensure that these principles are adhered to, the ministry of finance should set clear guidelines for indicator selection and should enforce them, including by reviewing and approving all program indicators proposed by spending ministries.

Developing performance indicators that are truly helpful for budget decision-making is something which takes time and effort. There are no shortcuts here. Using only indicators for which the data happens to be on hand simply guarantees failure.

————————————————————————————————————

* Approximately speaking. There are a few “wrinkles” to this rule – for example, that it does not apply to “administration” programs. For more detail, see my 2011 performance budgeting manual.

6 thoughts on “Non-Performing Performance Indicators”

  1. Thanks Marc. I think a contributing factor has been that the requirement to provide performance indicators is often externally driven, i.e. service delivery agencies provide them because central agencies like the MOF or MOP require that such indicators accompany budget submissions or be included in annual reports. This also reflects a lack of internal demand from the leadership and management of these service delivery agencies, who have inadequate governance, monitoring and evaluation systems in place to support their strategic planning frameworks (themselves often produced to satisfy MOF or MOP demands). It takes time to build this capacity across government, agency by agency. Development partners do not help when their budget reform or PFM reform programs include superficial and time-driven disbursement indicators, such as the number of agencies producing strategic plans or performance budgets within a single financial year.

  2. Thanks for the article. However, I think that the problem with outcome indicators is that you have to demonstrate causality (from programs to outcomes) and there is a delay in the effects. I suppose you can correct this problem by incorporating year t+n impacts.

    1. Thanks for your comment, Anna. You raise an important issue – so important, in fact, that I will try to devote a future blog piece to it when I can find time. So just some brief comments at this stage. Yes, it is true that outcomes are, in general, only partly under the control of government agencies (because of the influence of so-called external factors). And, yes, it is also true that some outcomes take significant time to realize. Despite this, we all agree that agencies need to be held accountable for the effectiveness (= outcomes achieved) of their programs and interventions. Surely we can go further and say that the effectiveness of expenditure also needs to be taken into account in making budget decisions. If, for example, a specific government program or intervention is entirely ineffective, then consideration should in many cases be given to the option of abolishing it entirely (particularly if it is not possible to make it effective through radical reform).

      In addition, I think we all agree that it is important, when considering agency requests for additional budget funding, to take into account how effective they are with the funding they already receive. For example, suppose the school system asks for more money, claiming it is necessary to achieve better student educational outcomes. Suppose, however, that analysis reveals that the schools achieve worse results with the funding they already receive than comparable countries with similar funding levels (e.g. much worse PISA scores), and that this can’t be justified by obvious “external factors”. In this case, it would be reasonable for the Ministry of Finance to say “no, go away and reform your curriculum, teacher training and management, and other aspects of the school education system to achieve better value for money before you come asking for extra funds from the budget”.

      Of course, there are lots of other aspects of this issue – such as the distinction between high-level and lower-level outcomes – the latter being more under control.

  3. Marc, thanks a lot for the blog. I’m trying to digest the idea of indicators promised for future delivery. Ideally a performance indicator should be measured (on a regular basis) by some impartial player like the statistical office. If an indicator is not readily available, it has to be developed and produced in the future at extra cost. Do you have a good example of this market for services among government agencies being well developed – where any program manager can knock on the door of the CSO to ask for advice on developing the indicator needed and then having it produced on a regular basis, where the manager has the right to pay the CSO out of the program’s envelope, and where the CSO is constrained enough to ask for a fair price?

    1. An interesting and perceptive point, Balazs. And something which demands a fuller response than the couple of lines I can write here. But one core issue is that of the optimal respective roles of the Central Statistical Office (CSO) and government ministries in delivering performance indicators (PIs). In most countries, much of the work of PI reporting is left to the ministries themselves, with the quality-control role performed by the supreme audit institution. I feel that there are probably good reasons for leaving many indicators to the ministries rather than handing them to the CSO (e.g. topic-area expertise, and the capacity to piggy-back data collection on ministry administrative systems), but equally that in some cases the CSO is more appropriate. But I must confess to never having thought systematically about exactly where the dividing line should be. In respect to the specific question you raise, my answer is “no”, I’m not aware of such an example. Possibly other readers might be able to comment?
