“Government Analytics” vs M&E

To what extent can digitalization give us what we need to properly measure government performance? What follows is the last in a series addressing this question, as well as a closely associated question: should we shift our focus from the monitoring and evaluation of performance to government analytics?

Although digitalization is a valuable tool for improving government performance measurement, there are – as outlined in the preceding pieces – major limits to its potential contribution. In particular, the effectiveness of government expenditure can only to a limited extent be measured using administrative data (data on activities carried out and other information routinely collected during service delivery). Outcome and output quality indicators need to draw substantially on data sources beyond administrative data, including surveys, physical sampling, testing, official statistics, client interviews, and expert quality assessments. Digitalization's biggest contribution lies in indicators of outputs and intermediate services. Even here, however, there are significant limits to that contribution – for example, in measuring efficiency*.

We are left with the conclusion that, valuable as it is, the administrative data to which digitalization gives us easier access is only one of the data sources required for good public sector performance measurement.

When you think about it, this point is obvious. So why am I making it?

My immediate inspiration comes from having just read the World Bank’s new Government Analytics Handbook. The Handbook is a mine of valuable information and analysis. Nevertheless, there are a few aspects that trouble me. By “government analytics,” the Handbook means the analysis of data from digitalized government business processes, supplemented by some use of surveys. It asserts that government analytics based on these data sources can provide public managers and other stakeholders with real-time performance dashboards that cover all dimensions of performance from inputs through activities and outputs to outcomes. This leads to the recommendation that all government agencies establish government analytics units to carry out this analysis and provide the dashboards.

I’ve got two problems with this. The first is that it is, in my view, inappropriate to discuss government performance measurement with such a narrow focus on two data sources – and with a primary emphasis on administrative data – thereby disregarding the importance of other data sources. This makes me, incidentally, unsympathetic to the idea that we should have a shift in terminology away from “performance measurement” to “government analytics.”

The other thing that worries me is the exclusive emphasis on performance measurement – monitoring – and the failure to acknowledge the crucial role of evaluation. Performance measures are valuable, but often not sufficient. The techniques by which these measures are analyzed – including formal evaluation – are extremely important. The importance of evaluation in government is a cause that the World Bank has long espoused, including in the form of advice to governments to establish “monitoring and evaluation” (M&E) systems. Organizationally, this advice translates into recommendations that government organizations have dedicated M&E units with a broad mandate – as opposed to government analytics units with a narrow focus on the analysis of administrative microdata.

The “government analytics” approach looks to me far too much like an attempt to copy private-sector business analytics into the public sector. Business analytics are fine as far as they go. But they are not enough, precisely because government is not the same as the private sector. In the first place, outcomes matter to government, whereas customer satisfaction – which is not the same thing – is essentially the only thing that matters to businesses. In the second place, many government outputs are not delivered to specific clients/customers, but to the community as a whole. These two fundamental facts make public sector performance measurement significantly different from performance measurement in the private sector. While it is useful to learn from good private-sector practice in performance measurement, simply copying private-sector approaches is not the way to go.

None of this removes my enthusiasm for exploiting digitalization to the full to help improve government performance measurement. But this needs to be part of a much broader M&E strategy.

*Thus, although administrative data will provide information on output per unit of labor where staff are devoted exclusively to the delivery of a single type of output, when staff are involved in delivering multiple types of outputs, administrative data will not typically record how their time is allocated between those outputs. To measure output per unit of labor, it is then necessary to introduce time records – that is, timesheets on which staff log how much time they spend on each of the multiple outputs they help deliver.
