Evaluation and Budgeting – a UK Perspective

from Michael Kell, Chief Economist at the UK National Audit Office

The PFM Results blog recently drew attention to the need to make better use of evaluation in budgeting, as discussed in Marc Robinson’s recent paper Connecting Evaluation and Budgeting. A UK National Audit Office report on Evaluation in Government – published December 2013 – addresses precisely this issue. The report examines the coverage of evaluation evidence in the UK government, how good it is, how it is used, how it is produced and at what cost. It aims to quantify as much as possible the state of evaluation in UK government.

The UK Government spends more than £700 billion a year. Every pound it spends, and every tax and regulation it introduces, provides an opportunity to gather and analyse information on the impact and cost-effectiveness of these initiatives.

Why does government often miss the opportunity to use evaluation to shape spending allocations, policy making and operational delivery?

Is it because departments don’t realise that evaluation is important? The UK Government does know how important evaluation is. Various pieces of guidance to departments – including Managing Public Money, The Magenta Book, and The Green Book – explain why evaluation matters and how it can be done well. But it is not enough simply to know that this guidance exists – we found evidence that not all departments follow it.

Is it because there are gaps in evaluation evidence? The UK Government doesn’t publish a comprehensive overview of the evaluation evidence that does exist so it is far from straightforward to find out what government knows about the cost-effectiveness of its activity. We weren’t able to map all evaluation evidence but we did find evidence of significant gaps. Departmental Chief Analysts say there are gaps too, and there are limited plans to evaluate major projects.

Is it because evaluations are poor quality? When departments do take the opportunity to evaluate policies and programmes, the evaluation reports are frequently of poor quality and don’t provide a robust analysis of impact or cost-effectiveness. Indeed, four of the 15 departmental chief analysts we surveyed agreed that their own evaluation evidence was quite poor. We found that fewer than half of the 34 evaluations we looked at in detail provided robust evidence on policy impact – meaning that the findings of the majority could not be relied on. More disturbingly, we also found that the least robust evaluations often made bold, uncaveated claims about the positive impacts of the policy examined.

Do departments use the evidence they have? Departments don’t make much use of the evaluation evidence they have – there were only a handful of examples where departments were able to say that the evidence had made any difference to policy. They don’t tend to use evaluation evidence to inform Impact Assessments – only around 15 per cent of Impact Assessments referred to evaluation evidence. The same was true of the spending review documentation departments gave to the Treasury – only a limited proportion of the bids for resources in Spending Review 2010 included any reference to evaluation evidence.

Can this report make a difference where others have failed? We hope so. We think our recommendations will improve the coverage of evaluation evidence, the quality of it, and how it is used.

The NAO report’s main authors are Michael Kell and Phil Bradburn. Both have experience of working in a range of government departments on evaluations before joining the NAO. 