Performance Indicators Sub-Group Meeting

This meeting was hosted by the Centre for Health Economics, University of York, on 7 July 2005.

What is performance and how do we measure it? What are performance indicators and just how sensitive are they when combined as composite measures? Who is performance measurement for and why are comparative rankings and indices so popular? Are there alternative or optimal systems of performance measurement? Can we really be sure that star-ratings or other performance frameworks do justice to the performance of the organisations they inspect?

Researchers in the ESRC Public Services Programme are using these critical questions to disentangle the complex performance measures that produce the tables and scores by which the government and many of us judge our schools, hospitals and local councils. Three of the Programme’s projects met for an intensive seminar in July 2005 to review the research methods they will use to find out whether these judgements are fair, whether existing performance regimes produce reliable measures, and whether perverse incentives result.

A team from the University of York’s Centre for Health Economics will be unpicking composite measures and trying to ascertain the reliability of performance indicators: they will gauge how much random variation or ‘noise’ distorts performance scores, and examine what implicit values or choices – for instance, how much do we value a crime prevented against a crime solved? – have been built into composite measures. Other teams, from Oxford/New Local Government Network (NLGN) and the National Institute of Economic and Social Research (NIESR), are widening this investigation: they will provide insights into the factors that make an organisation ‘good’ or ‘bad’ – for example, whether the socio-economic profile of a particular area has an effect – and explore whether we can put values on outputs and outcomes in public services to build more robust measures for the future.

All of these projects will report their results to the Programme in 2006. Taken together, they will carry significant messages for policy makers about how strong a basis their judgements and rankings rest on, as well as for the practitioners caught up in the assessment processes of performance regimes. These ideas will be of significant cross-domain interest and will help develop methodologies of performance assessment.

This meeting brought together representatives from three small grant project teams to examine the methodologies of, and commonalities between, their projects. It was an opportunity for the teams to share ideas and to discuss progress made since the 14 small grant projects met in March; all present felt that more inter-project contact and joint working would be helpful in the coming months. It is also clear that the methods, techniques and data developed by individual projects – each focusing on its own study’s public service domain(s) – may have applications across wider public services. In this sense, the meeting sought to review each project team’s research progress to date and to help develop it in the future.

View the report of the meeting.
View the presentation, ‘Are composite measures a robust reflection of performance?’, by Rowena Jacobs, Maria Goddard, and Peter Smith (Centre for Health Economics, University of York).
View the presentation, ‘Metrics, targets and performance: the case of NHS trusts’, by Philip Stevens & Mary O’Mahony (NIESR).
View the presentation, ‘Correlates of success and local government’, by Iain McLean (Nuffield College, University of Oxford).
[The above documents are PDF or PowerPoint.]