Evaluating the influence of data collector training for predictive risk of death models: an observational study
Arvind Rajamani (1), Stephen Huang (1), Ashwin Subramaniam (2), Michele Thomson (3), Jinghang Luo (3), Andrew Simpson (3), Anthony McLean (1), Anders Aneman (4), Thodur Vinodh Madapusi (5), Ramanathan Lakshmanan (6), Gordon Flynn (7), Latesh Poojara (8), Jonathan Gatward (9), Raju Pusapati (10), Adam Howard (11), Debbie Odlum (3)

  1. Department of Intensive Care Medicine, The University of Sydney Nepean Clinical School, Kingswood, New South Wales, Australia
  2. Department of Intensive Care Medicine, Peninsula Clinical School, Monash University, Frankston, Victoria, Australia
  3. Nepean Hospital, Penrith, New South Wales, Australia
  4. Liverpool Hospital, Liverpool, New South Wales, Australia
  5. Intensive Care Medicine, Maitland Hospital, Maitland, New South Wales, Australia
  6. Fairfield Hospital, Fairfield, New South Wales, Australia
  7. Prince of Wales Hospital and Community Health Services, Randwick, New South Wales, Australia
  8. Blacktown Hospital, Blacktown, New South Wales, Australia
  9. The University of Sydney Northern Clinical School, Saint Leonards, New South Wales, Australia
  10. Hervey Bay Hospital, Hervey Bay, Queensland, Australia
  11. Royal Perth Hospital, Perth, Western Australia, Australia

Correspondence to Dr Arvind Rajamani, Intensive Care Medicine, The University of Sydney Nepean Clinical School, Kingswood NSW 2747, Australia; rrarvind{at}hotmail.com

Abstract

Background Severity-of-illness scoring systems are widely used for quality assurance and research. Although these systems were validated using trained data collectors, there are few data on the accuracy of real-world data collection practices.

Objective To evaluate the influence of formal data collection training on the accuracy of scoring system data in intensive care units (ICUs).

Study design and methods Quality assurance audit conducted using survey methodology principles. Between June and December 2018, an electronic document with details of three fictitious ICU patients was emailed to staff from 19 Australian ICUs who voluntarily submitted data on a web-based data entry form. Their entries were used to generate severity-of-illness scores and risks of death (RoDs) for four scoring systems. The primary outcome was the variation of severity-of-illness scores and RoDs from a reference standard.

Results 50/83 staff (60.2%) submitted data. On Bayesian multilevel analysis, severity-of-illness scores and RoDs were significantly higher for untrained staff. The mean (95% high-density interval) overestimation in RoD due to the training effect was 0.24 (0.16, 0.31), 0.19 (0.09, 0.29) and 0.24 (0.10, 0.38) for patients 1, 2 and 3, respectively (Bayes factor >300, decisive evidence). Both groups (trained and untrained) showed substantial variability, with coefficients of variation of up to 38.1%. Untrained staff made more errors in interpreting scoring system definitions.

Interpretation In a fictitious patient dataset, data collection staff without formal training significantly overestimated severity-of-illness scores and RoDs compared with trained staff, and both groups exhibited wide variability. Strategies to improve practice may include providing adequate training for all data collection staff, refresher training for previously trained staff and auditing the raw data submitted by individual ICUs. The results of this simulated study require validation in real patients.
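As background for the variability metric reported in the Results: the coefficient of variation (CV) is the standard deviation divided by the mean, expressed as a percentage. A minimal Python sketch, using made-up risk-of-death estimates for a single patient (hypothetical values, not the study's raw data):

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, as a percentage."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical RoD estimates from six data collectors for one patient
rods = [0.30, 0.42, 0.35, 0.55, 0.28, 0.47]
print(f"CV = {coefficient_of_variation(rods):.1f}%")
```

A CV approaching the 38.1% reported in the study would mean the spread of collectors' estimates is more than a third of their average, i.e. substantial disagreement about the same patient.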

  • quality measurement
  • healthcare quality improvement
  • critical care

Footnotes

  • Contributors Not applicable.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request. All data relevant to the study are included in the article or uploaded as supplementary information.
