Article Text

Estimating misclassification error in a binary performance indicator: case study of low value care in Australian hospitals
  1. Tim Badgery-Parker1,
  2. Sallie-Anne Pearson1,2,
  3. Adam G Elshaug1,3
  1. Faculty of Medicine and Health, School of Public Health, Menzies Centre for Health Policy, Charles Perkins Centre, The University of Sydney, Sydney, New South Wales, Australia
  2. Centre for Big Data Research in Health, University of New South Wales, Sydney, New South Wales, Australia
  3. The Brookings Institution, Washington, DC, USA
  Correspondence to Tim Badgery-Parker, Level 2, Charles Perkins Centre D17, The University of Sydney, Sydney, NSW 2006, Australia; tim.badgeryparker{at}sydney.edu.au

Abstract

Objective Indicators based on hospital administrative data have potential for misclassification error, especially if they rely on clinical detail that may not be well recorded in the data. We applied an approach using modified logistic regression models to assess the misclassification (false-positive and false-negative) rates of low-value care indicators.

Design and setting We applied indicators involving 19 procedures to an extract from the New South Wales Admitted Patient Data Collection (1 January 2012 to 30 June 2015) to label episodes as low value. We fit four models (no misclassification, false-positive only, false-negative only, both false-positive and false-negative) for each indicator to estimate misclassification rates and used the posterior probabilities of the models to assess which model fit best.

Results False-positive rates were low for most indicators: if the indicator labels care as low value, the care is most likely truly low value according to the relevant recommendation. False-negative rates were much higher but were poorly estimated (wide credible intervals). For most indicators, the models allowing no misclassification or allowing false negatives but no false positives had the highest posterior probability. The overall low-value care rate from the indicators was 12%. After adjusting for the estimated misclassification rates from the highest probability models, this increased to 35%.

Conclusion Binary performance indicators have the potential for misclassification error, especially if they depend on clinical information extracted from administrative data. Indicators should be validated by chart review, but this is resource-intensive and costly. The modelling approach presented here can be used as an initial validation step to identify and revise indicators that may have issues before proceeding to full chart review validation.
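The mixture relationship underlying these misclassification models can be illustrated with a short sketch (hypothetical Python, not the authors' code): with false-positive rate fp and false-negative rate fn, the probability of an observed positive label is (1 − fn)·p_true + fp·(1 − p_true), and inverting this relationship at the population level gives a standard Rogan–Gladen-style correction of an observed rate. The fp and fn values below are illustrative assumptions chosen for the example, not estimates from this study.

```python
def observed_prob(p_true, fp=0.0, fn=0.0):
    """Probability the indicator labels an episode positive, given the
    true low-value probability p_true, false-positive rate fp and
    false-negative rate fn. Setting fp = fn = 0 recovers the
    no-misclassification model; setting one rate to zero gives the
    false-positive-only or false-negative-only models."""
    return (1 - fn) * p_true + fp * (1 - p_true)

def corrected_rate(p_obs, fp, fn):
    """Invert the mixture at the population level: back out the implied
    true rate from an observed rate (Rogan-Gladen form)."""
    return (p_obs - fp) / (1 - fp - fn)

# With no misclassification, observed and true rates coincide.
print(observed_prob(0.12))  # -> 0.12

# Illustrative only: a small false-positive rate combined with a large
# false-negative rate can turn an observed rate of 12% into an implied
# true rate of about 35%.
print(round(corrected_rate(0.12, fp=0.02, fn=0.695), 2))  # -> 0.35
```

Under this formulation, a high false-negative rate deflates the observed rate far more than a comparable false-positive rate inflates it, which is consistent with the correction raising the overall rate substantially once false negatives are allowed for.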

  • quality measurement
  • performance measures
  • health services research
  • healthcare quality improvement



Footnotes

  • Twitter @AElshaug

  • Contributors TB-P conceived, planned and conducted the study, and wrote the article. AGE and S-AP provided overall supervision and assisted in drafting and revising the article. All authors read and approved the final article.

  • Funding This study was funded by the National Health and Medical Research Council (1109626), the NSW Ministry of Health, the HCF Research Foundation and The University of Sydney.

  • Competing interests AGE holds an HCF Research Foundation Professorial Research Fellowship, and receives income as a Ministerial appointee to the (Australian) Medicare Benefits Schedule (MBS) Review Taskforce, as a member of the Choosing Wisely Australia advisory group, the Choosing Wisely International Planning Committee and the ACSQHC’s Atlas of Healthcare Variation Advisory Group, as a Board Member of the NSW Bureau of Health Information (BHI), and as a consultant to Private Healthcare Australia and the Queensland and Victoria state health departments. TB-P has received scholarship income from the University of Sydney and the Capital Markets Cooperative Research Centre, and consulting fees from the Capital Markets Cooperative Research Centre, Queensland Health, the Victorian Department of Health and Human Services, and Private Healthcare Australia.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data may be obtained from a third party and are not publicly available. This analysis used data from the NSW Admitted Patient Data Collection. On reasonable request, the authors will assist in preparing a data access request.