Rethinking standardised infection rates and risk adjustment in the COVID-19 era
Hojjat Salmasian,1,2 Jennifer Beloff,1 Andrew Resnick,1 Chanu Rhee,1,3,4 Meghan A Baker,1,3,4 Michael Klompas,1,3,4 Marc P Pimentel1,5

1 Department of Quality and Safety, Brigham and Women's Hospital, Boston, Massachusetts, USA
2 Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
3 Department of Population Medicine, Harvard Pilgrim Health Care Institute, Boston, Massachusetts, USA
4 Infectious Diseases Division, Brigham and Women's Hospital, Boston, Massachusetts, USA
5 Division of Anesthesiology, Brigham and Women's Hospital, Boston, Massachusetts, USA

Correspondence to Dr Hojjat Salmasian, Department of Quality and Safety, Brigham and Women's Hospital, Boston, MA 02115, USA; hsalmasian{at}BWH.HARVARD.EDU


The COVID-19 pandemic has resulted in drastic changes in hospitals’ practices and their case mix. This has a direct impact on the framework used for reporting and risk-adjusting healthcare-associated infections (HAI), such as catheter-associated urinary tract infections (CAUTI) and central line-associated bloodstream infections. Metrics related to HAIs are incorporated into public and private ranking programmes for hospitals, including the Hospital Compare programme by the Center for Medicare and Medicaid Services (CMS), the Leapfrog Hospital Safety Grade, and Vizient’s Quality and Accountability Study. While the importance of preventing these infections is widely recognised, appropriately risk-adjusting HAI rates continues to be a challenge and a source of debate.1 2 We believe that the changes wrought in hospitals by the COVID-19 pandemic provide a fresh opportunity to rethink our frameworks for measuring and benchmarking HAIs.

Measuring the unadjusted incidence of HAIs is not equitable, as the risk of infection varies widely between hospitals depending on the patients they serve and the services they offer—healthcare facilities that care for older or more complex patients are likely to have higher rates of HAIs compared with facilities that serve younger patients with less complex disorders, even if they implement the same rigorous infection control measures. To help make fair comparisons, the Centers for Disease Control and Prevention (CDC) calculates a standardised infection rate (SIR) for each facility based on the presumed characteristics of their patients and the type of services being provided. To this end, hospitals submit their observed HAI cases, as well as data on the population at risk (ie, the denominator) for risk adjustment, via the National Healthcare Safety Network (NHSN). An elaborate framework is used for risk adjustment of each HAI. For example, CAUTI risk adjustment is based on the number of catheter-days in each type of hospital unit (medical intensive care, medical/surgical wards, and so on), number of hospital beds, academic teaching status and special facility types (ie, children’s hospital, military, Veterans Affairs, and so on). Using these denominator data, the CDC calculates an expected number of infections for each hospital. The SIR is then calculated as the number of observed infections divided by the number of expected infections. The CDC periodically measures baseline rates of HAIs and denominator characteristics to rebalance its models so that an SIR of 1.0 represents the ‘average’ facility in the nation. The most recent rebaselining occurred in 2015 and included changes to reflect ongoing decreases in HAI rates.3
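The SIR arithmetic described above can be sketched in a few lines. The unit types, catheter-day counts and baseline rates below are invented for illustration and are not actual NHSN parameters:

```python
# Sketch of the SIR calculation: observed infections divided by the
# expected count derived from denominator data and baseline rates.
# All numbers below are hypothetical, not actual NHSN values.

def expected_infections(denominator_data, baseline_rates):
    """Expected HAI count: exposure (eg, catheter-days) in each unit type
    multiplied by that unit type's baseline infection rate, then summed."""
    return sum(days * baseline_rates[unit]
               for unit, days in denominator_data.items())

# Hypothetical catheter-days reported by one facility, by unit type
catheter_days = {"medical_icu": 900, "med_surg_ward": 2400}
# Hypothetical baseline CAUTI rates per catheter-day (from a baseline period)
baseline = {"medical_icu": 0.002, "med_surg_ward": 0.001}

expected = expected_infections(catheter_days, baseline)  # 900*0.002 + 2400*0.001 = 4.2
observed = 3
sir = observed / expected  # below 1.0: fewer infections than expected
```

An SIR below 1.0 indicates fewer infections than the model expects for that facility's reported exposure; the rebaselining described above amounts to refitting the baseline rates so that the national average lands at 1.0.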

However, not all risk adjustment is created equal. There is a wide array of quality and safety measures that incorporate risk adjustment, and their approaches vary considerably. Examples include the Agency for Healthcare Research and Quality’s (AHRQ) Quality Indicators, such as the Patient Safety Indicators (PSIs), the American College of Surgeons’ National Surgical Quality Improvement Program and the Society of Thoracic Surgeons Performance Measures. Some of these programmes use encounter-level data for risk adjustment, while others use only unit-level and hospital-level data.

AHRQ measures, for instance, are calculated using claims data submitted to the CMS, which incorporate individual patients’ encounter-specific clinical characteristics, including their comorbidities, admission source and other data points. For example, the risk adjustment for AHRQ’s PSI-12 (perioperative pulmonary embolism or deep venous thrombosis) incorporates dozens of comorbid conditions and several hundred types of surgeries for every patient.

By comparison, risk adjustment for CAUTI is primarily based on the aggregated number of catheter-days in each type of hospital ward—a far cry from assessing risk in individual patients. The use of catheter-days for risk adjustment is questionable at best. Reduction of catheter-days can decrease the risk of infection but paradoxically increase the SIR by shrinking the denominator. Incorporating patient-level variables into the calculation might help overcome this limitation.
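A toy calculation makes the paradox concrete; the baseline rate and counts below are invented for illustration:

```python
# Hypothetical illustration of the catheter-day paradox: a successful
# catheter-reduction effort lowers both catheter-days and infections,
# yet the SIR (observed / expected) gets worse.
baseline_rate = 0.002  # assumed CAUTI risk per catheter-day

# Before the catheter-reduction effort
days_before, infections_before = 2000, 5
sir_before = infections_before / (days_before * baseline_rate)  # 5 / 4.0 = 1.25

# After: half the catheter-days and fewer infections overall
days_after, infections_after = 1000, 3
sir_after = infections_after / (days_after * baseline_rate)  # 3 / 2.0 = 1.5
```

Here infections fell from 5 to 3, but because the denominator shrank faster, the SIR rose from 1.25 to 1.5, penalising the facility for removing unnecessary catheters.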

AHRQ’s use of CMS data sets a precedent for using encounter-level data, rather than facility-level or unit-level data, to risk adjust quality metrics. The near-universal penetration of electronic health record (EHR) systems into hospitals makes it feasible to go a step further and use granular clinical data for risk adjustment. This overcomes the limited coverage of CMS claims data (after all, more individuals are insured through their employer or other payors than through the CMS) and will facilitate more accurate risk adjustment. Several studies have now demonstrated that risk adjustment using patient-level EHR data is superior to using claims data.4–6
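As a rough sketch of what encounter-level risk adjustment could look like, the expected infection count becomes a sum of per-patient predicted probabilities rather than a unit-level aggregate. The covariates, coefficients and model below are hypothetical and illustrative only, not any agency's actual method:

```python
# Illustrative encounter-level risk adjustment (hypothetical model):
# each encounter gets a predicted HAI probability from patient-level
# covariates, and the facility's expected count is the sum of these.
import math

def predicted_risk(encounter, coefs, intercept):
    """Logistic model: predicted probability of an HAI for one encounter."""
    z = intercept + sum(coefs[k] * encounter[k] for k in coefs)
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients; a real model would be fit on national data
coefs = {"age_over_65": 0.6, "catheter_days": 0.15, "immunosuppressed": 0.9}
intercept = -6.0

encounters = [
    {"age_over_65": 1, "catheter_days": 4, "immunosuppressed": 0},
    {"age_over_65": 0, "catheter_days": 2, "immunosuppressed": 1},
]

expected = sum(predicted_risk(e, coefs, intercept) for e in encounters)
observed = 1
sir = observed / expected  # same ratio as before, patient-level denominator
```

Under this kind of model, moving a complex patient between units no longer distorts the expected count, because the risk travels with the patient rather than with the unit label.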

The COVID-19 pandemic has forced hospitals to transform the way that healthcare is delivered (eg, by expansion of virtual care into the inpatient setting),7 8 and in so doing has exposed weaknesses in several risk adjustment models. Hospitals have had to quickly increase their surge capacity for patients with COVID-19. In our hospital, we repeatedly moved, merged and eliminated several hospital units to make beds available for intensive care units (ICUs) and COVID-specific units. As the meaning of a unit changes, so does its role in a unit-based SIR model. The CDC and CMS responded by making data submission optional for the last quarter of calendar year (CY) 2019 and the first two quarters of CY 2020, but this concession does not address the fundamental problem: unit-based modelling for SIRs is a very coarse way to account for the wide variability in patients that may be present in units with similar NHSN identities.

Furthermore, several hospitals temporarily repurposed non-traditional areas into specialised COVID-19 units, for example by turning operating rooms into ICUs,9 or by moving part or all of a hospital unit to another building to facilitate regionalisation of patients without COVID-19. Some hospitals have considered making these changes permanent. These strategic actions, along with changes in care-seeking behaviour and admission thresholds, may have a direct impact on hospitals’ NHSN data submissions and thus on their quality assessments, rankings and pay-for-performance incentives. The meaning of a ‘unit’ has become a more fluid concept as patients are cared for in hallways and units are moved, restructured, added, removed and repurposed to meet the needs and demands of the pandemic. The unit-based approach to risk adjustment of SIRs is not well suited to this ever-evolving healthcare environment. It also assumes a high degree of regionalisation of care (eg, NHSN metrics exclude oncology units but fail to account for oncology patients in non-oncology units), an assumption that is becoming less realistic as healthcare becomes ever more multidisciplinary. Hospitals will certainly want to track infections by unit as part of comprehensive quality improvement programmes. However, using patient-specific, encounter-level clinical data to risk stratify infection rates is a much more patient-centred and meaningful approach to measuring hospital performance.

The CDC’s response to the COVID-19 pandemic includes guidance on how to define ‘virtual units’ (temporarily or permanently) and how to submit data in ways that more closely represent actual hospital operations.10 We commend the thoughtful workarounds offered by the CDC to make the NHSN data model work during the COVID-19 pandemic. However, we believe this is a unique opportunity to revisit our nation’s approach to data collection for HAIs and risk adjustment for SIRs. The unit type is only a proxy for the complexity of the patients, and the changes brought about by COVID-19 highlight the limitations of such a proxy. Consequently, the CDC should consider revamping the NHSN and requiring facilities to submit coded, deidentified clinical data about each encounter for all HAI measures (indeed, some measures reported through the NHSN, such as the surgical site infection (SSI) measures, already include some encounter-level data). These data can still include the unit type (or the number of days spent in each specific unit type) but should also include patient-level and encounter-level factors that better capture the severity of the patient’s illness, the complexity of the care they were receiving and the associated risk of HAIs. We understand this will increase the complexity of surveillance, but the examples provided above (regarding SSIs and AHRQ measures), as well as the participation of many hospitals in programmes such as Vizient’s, which analyse encounter-level data to deliver clinical and operational insights, set a precedent for collecting these data and reporting them electronically.

We anticipate that using patient-level data for risk adjustment will have several downstream effects, both nationally and locally. We expect that it will provide meaningfully different results compared with the current system; consequently, pay-for-performance programmes that focus heavily on HAIs, such as the CMS Hospital-Acquired Condition Reduction Program and Value-Based Purchasing, will gain greater clarity on accountability for quality of care.5 11 Moving away from a purely unit-based approach can help practitioners move beyond a unit-based accountability model towards a more holistic one. Benchmarking between hospitals will be more meaningful, as risk adjustment for HAIs will be tailored to each hospital’s specific patient population and the types of care delivered; this in turn should increase our trust in publicly reported comparison data. At the local level, improved risk adjustment will enable hospital operational teams to focus on identifying true performance outliers rather than drawing questionable conclusions from limited and inaccurate models. This will ensure hospital improvement efforts are better positioned to improve patient care rather than deliberating over process measures of questionable validity. If there is a silver lining to the COVID-19 pandemic, it may be that it propels us to reimagine the way we do things, including our approach to quality measurement.

Ethics statements



  • Twitter @andyresnick, @MarcPimentelMD

  • Correction notice The article has been corrected since it was published online first. The order of authorship has changed: co-author Marc P Pimentel is now listed last.

  • Contributors HS drafted the initial version of the manuscript. All authors revised the manuscript and contributed to the final version.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.