
How ratings add up

Are composite measures an appropriate tool for measuring performance?

Current government policy in England emphasises the creation and publication of composite performance measures, which are used widely in health, social care, education, the environment and other public sector areas. These composite measures combine many individual indicators of performance into a single summary assessment. As such, composite measures integrate a large amount of information in a format that is easily understood.

However, many questions surround the validity of composite measures for evaluating performance in the public sector. Although apparently simple, the process of creating a composite from a wealth of disparate performance data is in fact complex, involving a series of judgements at each stage of construction. The methods adopted and the judgements made can have a profound impact on the results.

This study will assess whether composite measures do indeed provide a robust and stable summary of public sector performance. Do they accurately reflect the performance of organisations? How much of the variation in performance indicators is due to random statistical variation?

Research methods

Researchers will analyse data relating to health care (Star Ratings of NHS acute Trusts, which rate performance on a scale from zero to three stars) and local government (the Comprehensive Performance Assessment, or CPA, which rates performance across seven services: social services, environment, housing, education, benefits, libraries, and use of resources). Researchers will examine how Star Ratings and CPA scores are constructed, and how altering the methods and judgements used in construction affects the results.

Further Information: Project Posters

Updated Project Poster 2009

Below is a summary of this project’s provisional findings. It was originally presented as a dissemination poster, which is available here as a PDF document. All figures appear at the bottom of this summary as thumbnails; click a thumbnail to view the full-size image. Alternatively, where figures are referred to in the text, click the linked text for a full-size version.

 

How Do Ratings Add Up?

Background

The proliferation of performance rankings of public sector organisations, particularly in England, has led to an explosion of local government, school and health service league tables, as well as international league tables. Even when it is not directly linked to funding, league table position can have major implications, for example wholesale changes in leadership. But what we do not know is how far rankings based on composite indices (constructed by adding together a range of different performance indicators) reflect random variation, measurement error or real differences in performance, and how far ranking scores are sensitive to small changes in aggregation method.

Aims

We aimed to test the robustness of rankings created from composite performance measures by investigating the performance indicators that combine to form them, in order to discover:

» how far random variation in measuring the underlying performance indicators affects the composite score;

» how much uncertainty surrounds the composite indicator;

» how far changes in weightings of the various performance indicators that are added together to form the composite score affect the relative positions of the organisations being ranked.

What We Did

To assess the extent of uncertainty in the performance indicators making up a composite, we used Monte Carlo simulation, drawing 1,000 random samples for each performance indicator. We produced scaled-down versions of the two main composite measures available in England, the CPA and the star ratings: a composite for 117 NHS hospital Trusts consisting of 10 indicators from the star ratings, and a composite for 97 local authorities drawing on 35 indicators from the CPA. We could then test these composite scores for their sensitivity to random variation, uncertainty and alternative aggregation rules, including changes in weightings.
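
To make the approach concrete, below is a minimal sketch of this kind of Monte Carlo exercise in Python. The five organisations, four indicators, standard errors and equal weights are purely illustrative assumptions, not the study’s actual data or code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup (not the study's data): 5 organisations measured on
# 4 performance indicators, each observed with sampling error.
observed = np.array([
    [0.82, 0.74, 0.91, 0.60],
    [0.78, 0.80, 0.88, 0.65],
    [0.75, 0.69, 0.85, 0.72],
    [0.70, 0.77, 0.80, 0.58],
    [0.68, 0.66, 0.79, 0.70],
])
std_err = np.full_like(observed, 0.03)        # assumed measurement uncertainty
weights = np.array([0.25, 0.25, 0.25, 0.25])  # equal weights, for illustration

n_draws = 1000
ranks = np.empty((n_draws, observed.shape[0]), dtype=int)

for i in range(n_draws):
    # Draw one simulated realisation of every indicator,
    # then aggregate into a composite score.
    sample = rng.normal(observed, std_err)
    composite = sample @ weights
    # Rank 1 = best composite score in this draw.
    ranks[i] = (-composite).argsort().argsort() + 1

# Summarise how uncertain each organisation's league-table position is.
for org in range(observed.shape[0]):
    lo, hi = np.percentile(ranks[:, org], [2.5, 97.5])
    print(f"Organisation {org + 1}: median rank {np.median(ranks[:, org]):.0f}, "
          f"95% interval [{lo:.0f}, {hi:.0f}]")
```

Repeating the aggregation over many simulated draws in this way yields an interval for each organisation’s rank rather than a single point, which is the sense in which uncertainty around composite scores can be quantified.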

Provisional Findings

» We found that changes in aggregation method (either altering weightings or decision rules) could have a substantial impact on results, with individual hospitals jumping from a 0-star rating to a 3-star rating depending on small alterations in the aggregation rules (see Figure 3, and the illustrative sketch after this list).

» Our methods indicate how uncertainty shrinks once we take account of random variation in performance indicators (Figures 1 and 2).

» Accordingly, if composite performance measures remain popular it is important that they are published with indications of uncertainty.
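
To illustrate the kind of sensitivity involved, the following sketch uses made-up scores for three hypothetical Trusts: the ranking reverses when weight shifts from one indicator to another, and changes again under a ‘worst indicator dominates’ decision rule, a simplification loosely analogous to (but not the same as) the key-targets logic in star ratings.

```python
import numpy as np

# Two alternative aggregation rules over the same indicator data
# (all values and weights are purely illustrative assumptions).
scores = np.array([
    [0.90, 0.40, 0.70],   # Trust A: strong on indicator 1, weak on 2
    [0.65, 0.70, 0.68],   # Trust B: consistent mid-range performer
    [0.55, 0.85, 0.60],   # Trust C: strong on indicator 2
])

w1 = np.array([0.5, 0.25, 0.25])   # weighting scheme 1
w2 = np.array([0.25, 0.5, 0.25])   # scheme 2: shift weight to indicator 2

for name, w in [("scheme 1", w1), ("scheme 2", w2)]:
    composite = scores @ w
    order = (-composite).argsort()
    print(name, "ranking:", [f"Trust {'ABC'[i]}" for i in order])

# A 'worst indicator dominates' decision rule: each Trust is judged
# on its weakest indicator (our simplification, not the actual rule).
floor_rule = scores.min(axis=1)
print("minimum-indicator rule ranking:",
      [f"Trust {'ABC'[i]}" for i in (-floor_rule).argsort()])
```

In this toy example Trust A is top under the first weighting, bottom under the second, while the minimum-indicator rule favours the consistent Trust B: exactly the kind of rank instability the findings above describe.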

Figures

Figure 1 (jacobsfig1.jpg) · Figure 2 (jacobsfig2.jpg) · Figure 3 (jacobsfig3.jpg)

Other Outputs and Related Webpages

Project page on the ESRC Society Today website

Goddard M, Jacobs R. Using composite indicators to measure performance in health care. In: Smith PC, Mossialos E, Leatherman S, Papanicolas I, editors. Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge University Press; 2009. Chapter 3.4.

April 2007: How Do Performance Indicators Add Up? An Examination of Composite Indicators in Public Services, Public Money and Management, Volume 27 Issue 2.

June 2006: Are Composite Measures a Robust Reflection of Performance? Centre for Health Economics Research Paper 16.

A Centre for Health Economics policy discussion briefing (pdf) detailing this project’s findings.

Research Team

Rowena Jacobs

Rowena Jacobs is a Research Fellow at the Centre for Health Economics at the University of York and has worked since 1999 in the Health Policy Research programme funded by the Department of Health. She has a PhD in Economics from the University of York. Her research focuses on performance measurement in health care, and the associated methodological, analytical and policy issues. She has acted as consultant to various national and international agencies, including the World Bank and WHO.

Email: rj3@york.ac.uk

Maria Goddard

Maria Goddard has been Assistant Director of the Centre for Health Economics at the University of York since 1999. She leads the Health Policy research team and her main interests are performance measurement, regulation and contracting in health care systems. She also worked for three years as an Economic Adviser in the Department of Health.

Email: mg23@york.ac.uk

Peter Smith

Peter C. Smith is Professor of Economics in the Centre for Health Economics at the University of York. His research interests include public finance, health care finance and public service regulation. He has advised numerous national and international agencies, including the OECD, the WHO and government ministries, and is a commissioner at the Audit Commission.

Email: pcs1@york.ac.uk