Is efficiency analysis the poor relation in the current wave of evaluation reforms?
Governments around the world have been establishing government-wide evaluation policies; creating central units and task forces to promote evaluation; and encouraging or requiring ministries to build evaluation capability and carry out more and better evaluations. In these reforms, the stated role of evaluation is invariably to analyze both efficiency and effectiveness. In practice, however, the reform effort has concentrated overwhelmingly on improving the analysis of effectiveness. Only perfunctory attention has been given to evaluation as a tool for analyzing and improving the efficiency of government service delivery.
In mainstream public sector usage, “effectiveness” means the extent to which an output (such as a medical treatment or school education) achieves its intended outcomes[1] (such as lives saved or literacy). “Efficiency” is about the cost of outputs, so that “improving efficiency means government being able to spend less to achieve the same or greater outputs, or to achieve higher outputs while spending the same amount”.[2] Efficiency is not the same as “cost-effectiveness,” which relates spending to outcomes rather than outputs: a cost-effectiveness improvement means achieving greater outcomes per dollar.
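The distinction is worth making concrete. Below is a minimal worked sketch in Python; the figures are invented purely for illustration:

```python
# Hypothetical figures, purely for illustration.
spending = 1_000_000          # total program spending in dollars
outputs = 5_000               # outputs, e.g. medical treatments delivered
outcomes = 400                # outcomes, e.g. lives saved as a result

# Efficiency relates spending to OUTPUTS: the unit cost of delivery.
unit_cost = spending / outputs              # $200 per treatment

# Cost-effectiveness relates spending to OUTCOMES.
cost_per_outcome = spending / outcomes      # $2,500 per life saved

print(f"Unit cost per output: ${unit_cost:,.0f}")
print(f"Cost per outcome:     ${cost_per_outcome:,.0f}")

# An efficiency gain: delivering the same 5,000 treatments for $900,000
# cuts the unit cost to $180 regardless of what happens to outcomes,
# which is why the two concepts should not be conflated.
```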
This is unsurprising given the nature of evaluation as a discipline. Although evaluation is routinely defined as the systematic analysis of both effectiveness and efficiency,[3] its focus is in fact almost entirely on the use of social science methods to analyze effectiveness. The typical evaluation methods monograph[4] treats methods for evaluating effectiveness, such as impact evaluation, in great detail. But when it comes to the evaluation of efficiency, it limits itself to a mention of several economic evaluation methods[5] (methods which in fact analyze cost-effectiveness rather than efficiency[6] and which are of no value in identifying options for reducing the cost of service delivery). There is rarely any acknowledgment of the efficiency analysis methods developed by accountants, management experts and others – such as various forms of cost analysis[7] and business process analysis. The implicit message is that evaluators can leave efficiency analysis to others, while firmly maintaining their focus on effectiveness.
Improving efficiency is as important for governments as improving effectiveness. It is therefore a problem if the evaluation system claims the mandate for efficiency analysis but has neither the competence nor the commitment to give that mandate the attention it requires.
There are in principle two ways of responding to this problem.
One is to rein in the pretensions of evaluation by limiting the mandate of the evaluation system to the evaluation of effectiveness[8] and certain related criteria. Efficiency analysis would then be left to others. After all, some advanced countries have made major progress in efficiency analysis without calling it evaluation, independently of any initiatives they may have taken under the evaluation banner.
Excluding evaluation systems from the analysis of efficiency would have the advantage of clarity. But the disadvantages are obvious. Governments often want programs analyzed from both the efficiency and effectiveness perspectives, and when this is the case it is far better if the two perspectives are integrated.
The alternative approach is to develop an evaluation system which truly covers, and focuses equally on, both effectiveness and efficiency analysis. This would require that central evaluation units and task forces give as much attention to the promotion of methods of efficiency analysis as to methods of effectiveness analysis. It would mean that the organizational units charged with the evaluation function within spending ministries would have people with the right skills to undertake each form of analysis. Evaluation would need to be seen as a multi-disciplinary activity requiring not only people trained in evaluation as conventionally defined, but also people with the right skills from other disciplines including management accounting and business process design.
There are a few examples internationally, such as Chile, of government-wide evaluation systems that have made serious efforts to develop and apply at least some efficiency analysis methods. They are, however, the exceptions.
Which of these two alternative approaches makes the most sense is an issue worthy of further discussion. My feeling is that the answer varies between countries.
One final point: In my last blog piece, which focused on the evaluation of effectiveness, I stressed the importance of including within the evaluation methods toolkit practical evaluation methods as well as more complex “scientific” methods. By practical evaluation methods I mean methods that are less analytically complex, less data-intensive, lower-cost and quicker. The same point applies to efficiency analysis. The most sophisticated cost analysis methods – such as output unit cost benchmarking – are very useful when applied in appropriate cases,[9] but they are also very demanding. By contrast, techniques such as business process analysis – mapping the processes by which inputs are turned into outputs and analyzing them to identify options for streamlining – are less complex, require only limited data, and do not demand the same level of specialist skill. To achieve the best results, we need to develop and promote a balanced toolkit of efficiency analysis methods.
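To illustrate the demanding end of that spectrum, here is a minimal Python sketch of the arithmetic behind output unit cost benchmarking; the offices and figures are entirely hypothetical, and the hard part in practice (assembling comparable cost and output data) is assumed away:

```python
from statistics import median

# Output unit cost benchmarking, in sketch form: compute what comparable
# delivery units spend per unit of the same output, then flag units whose
# unit cost sits well above the group benchmark. All figures are invented.
offices = [
    # (unit, total spending in $, outputs delivered)
    ("Office A", 2_400_000, 12_000),
    ("Office B", 1_500_000, 10_000),
    ("Office C", 3_900_000, 13_000),
    ("Office D", 1_100_000, 8_000),
]

unit_costs = {name: spend / delivered for name, spend, delivered in offices}
benchmark = median(unit_costs.values())    # a simple median benchmark

for name, cost in sorted(unit_costs.items(), key=lambda item: item[1]):
    flag = "  <-- well above benchmark" if cost > 1.2 * benchmark else ""
    print(f"{name}: ${cost:,.2f} per output{flag}")
print(f"Benchmark (median): ${benchmark:,.2f}")
```

The arithmetic is trivial; what makes the method demanding in practice is establishing that the outputs are genuinely comparable and that costs have been consistently attributed across units.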
1. The evaluation literature makes a distinction between “outcomes” and “impacts,” whereas in mainstream usage the term “outcome” covers both, as it does in this blog. The outcome/impact distinction is arguably artificial because there is no clear dividing line between the two. This is why, in mainstream usage, the outcome/impact distinction is replaced with a reference to lower-level and higher-level outcomes (or equivalent terms).
2. UK National Audit Office (2021), Efficiency in Government. (Note that accountants sometimes break this concept of efficiency into two components: “economy” and “efficiency.”)
3. This is made explicit in many of the definitions of evaluation (e.g. that of the 2018 US Evidence Act: “the assessment using systematic data collection and analysis of one or more programs, policies, and organizations intended to assess their effectiveness and efficiency”). But even where this isn’t explicit in the definition, effectiveness and efficiency are identified as “evaluation criteria.”
4. For examples see Rossi, Lipsey and Henry’s leading textbook Evaluation: A Systematic Approach; Anne Revillard (ed.), Policy Evaluation; the United Nations Evaluation Group’s Compendium of Evaluation Methods; and the EvalCommunity webpage on “efficiency” evaluation.
5. Cost-benefit analysis and cost-effectiveness analysis (and sometimes also data envelopment analysis).
6. The evaluation literature generally fails to distinguish between cost-effectiveness and efficiency, and generally uses the term “efficiency” to mean cost-effectiveness. (The influential OECD/DAC evaluation criteria quite idiosyncratically define efficiency to mean both cost-effectiveness and efficiency as conventionally understood.) The divergence of the evaluation lexicon from mainstream public sector terminology is unfortunate, as it is an inevitable source of confusion.
7. In Rossi et al.’s Evaluation: A Systematic Approach, “cost analysis” is mentioned briefly, but only in the sense of obtaining information on how much money is spent on the program being evaluated.
8. This is in effect what INTOSAI proposes in its guidelines on the Evaluation of Public Policies, which define evaluation as exclusively concerned with aspects of effectiveness (the achievement of impacts) and exclude efficiency analysis.
9. Bearing in mind the limitations on the applicability of output unit cost analysis, as discussed in my series of blogs on unit cost budgeting.