
Standardising hospitalist practice in sepsis and COPD care
Steven Bergmann1, Mary Tran2, Kathryn Robison1, Christine Fanning1, Simran Sedani1, Janet Ready1, Kelly Conklin3, Diana Tamondong-Lachica2, David Paculdo2, John Peabody2,4

1 Penn Medicine Princeton Health, Plainsboro, New Jersey, USA
2 QURE Healthcare, San Francisco, California, USA
3 Premier, Charlotte, North Carolina, USA
4 School of Medicine, University of California, San Francisco, California, USA

Correspondence to Dr John Peabody, University of California, San Francisco, CA 95817, USA; jpeabody@qurehealthcare.com

Abstract

Background Hospitalist medicine was predicated on the belief that providers dedicated to inpatient care would deliver higher quality and more cost-effective care to acutely hospitalised patients. The literature shows mixed results and has identified care variation as a culprit for suboptimal quality and cost outcomes. A scientifically validated engagement and measurement approach, such as Clinical Performance and Value (CPV) simulated patient vignettes, may provide the impetus to change provider behaviour, improve system cohesion, and improve quality and cost efficiency for hospitalists.

Methods We engaged 33 hospitalists from four disparate hospitalist groups practising at Penn Medicine Princeton Health. Over 16 months and four engagement rounds, participants cared for two simulated patients per round (one with chronic obstructive pulmonary disease [COPD] and one with sepsis), then received feedback, followed by a group discussion. At project end, we evaluated both simulated and real-world data to measure changes in clinical practice and patient outcomes.

Results Participants significantly improved their evidence-based practice (+13.7% points, p<0.001) while simultaneously reducing their variation (−1.4% points, p=0.018), as measured by the overall CPV score. Correct primary diagnosis increased significantly for both sepsis (+19.1% points, p=0.004) and COPD (+22.7% points, p=0.001), as did adherence to the sepsis 3-hour bundle (+33.7% points, p=0.010) and correct admission levels for COPD (+26.0% points, p=0.042). These CPV changes coincided with real-world improvements in length of stay and mortality, along with a calculated $5 million in system-wide savings for both disease conditions.

Conclusion This study shows that an engagement system—using simulated patients, benchmarking and feedback to drive provider behavioural change and group cohesion, with parallel tracking of hospital data—can lead to significant improvements in patient outcomes and health system savings for hospitalists.

  • evidence-based medicine
  • quality improvement
  • hospital medicine
  • implementation science


Introduction

Healthcare systems are using a variety of strategies to improve the quality and lower the cost of healthcare. These range from culling clinical data and setting performance metrics to introducing financial incentives and employing alternative payment models. Clinical data from electronic medical records are now available in four out of five hospitals.1 Nearly a third of physicians are directly employed by a hospital or health system, either individually or in a group practice, giving systems contractual leverage over clinical performance in ways that were not possible with independent practitioners.2 Alternative payment models, increasingly used under the Merit-based Incentive Payment System, offer incentives for clinicians to enter into at-risk performance contracts with payers.3 Hospital consolidations through mergers, acquisitions, joint ventures and affiliations have more than doubled since 2011, adding to calls for standardised care.4 While these forces are mounting, they have been largely ineffective in raising quality and lowering costs.5

Hospitalists are among the fastest growing groups of employed physicians, often contractually at risk and part of large multifacility practices.6 Hospitalist medicine grew out of the belief that providers dedicated to inpatient care would deliver higher quality, more efficient, safer and cost-effective care to acutely hospitalised patients. The evidence that hospitalists improve quality and lower the cost of care has been mixed. While some studies show hospitalist programmes drive significant improvements such as reducing length of stay (LOS),7 other investigations suggest these gains may come at the expense of overall outcomes and costs.8–10 More recent analyses suggest that spending on inpatient sepsis care by individual physicians varies more within facilities than between facilities and that increased inpatient spending is not associated with lower 30-day mortality or readmissions.11 As hospitalist care models become more ubiquitous and the impact hospitalists have on quality and cost of inpatient care grows, this ambiguity underscores the need for hospitalists to demonstrate they provide high-quality, lower cost care.

Standardising clinical practice around best practices and evidence-based guidelines is one way to improve quality and lower unnecessary costs. Building a group standard could also build group cohesion.12 13 Clinical variation, however, is difficult to measure,14 with inherent difficulty distinguishing between patient-level and provider-level variation.15 Clinical data are further confounded by the unavailability of data in real time, lack of uniformity across hospitals, and feedback that is hard to operationalise and may be delayed or even irrelevant.13

Correcting clinical practice variation and improving quality of care are even more challenging.16 17 Solutions that are one-dimensional, focused solely on individual health utilisation rates, or unsustainable are unlikely to change behaviour.18 Other challenges to reducing variation include inadequate communication among providers, heterogeneity in clinical knowledge, differences in culture and beliefs, few disincentives for undesired clinical behaviours, and lack of transparency of clinical practice among peers.19

Penn Medicine Princeton Health (PMPH) has earned both regional and national recognition for high-quality care.20 Nonetheless, PMPH recognised the opportunity to improve both quality and cost efficiency among their disparate group of hospitalists. The hospitalists at PMPH consist of four separate provider groups: Academic Teaching (oversee residents in internal medicine); Attending Directed non-teaching Service (ADS); a private community practice group; and individual, independent physicians. PMPH engaged QURE Healthcare and Premier in 2016 to administer a quality improvement study, using QURE’s Clinical Performance and Value (CPV) vignettes that ask providers to progress through interactive domains to determine the patient’s condition and define necessary treatment steps. The goal of the study was threefold: (1) to increase hospitalist adherence to evidence-based practice guidelines, (2) to raise quality and reduce variation, and (3) to reduce costs. This paper describes the results of a unique physician engagement solution designed to address all three goals. We report the results of engaging with hospitalists at PMPH for two high-volume diseases—sepsis and chronic obstructive pulmonary disease (COPD)—and document the clinical and financial improvements.

Methods

Setting

The project was done at PMPH, a healthcare system located in New Jersey. The quality improvement project, Princeton-Premier QURE Quality (P2Q2), began in September 2016 and concluded in December 2017.

Participants

The participants were practising hospitalists at PMPH organised into the four different groups identified above. Originally, 47 hospitalists voluntarily participated. Over the course of the study, 14 hospitalists did not fulfil the P2Q2 programme requirements or left the system, leaving 33 participants.

CPV vignettes

CPV simulated patient cases, known as CPV vignettes, were designed and developed by QURE Healthcare for this study. The simulated patients are online clinical cases that recreate a typical clinical encounter.21 CPVs ask the hospitalist to progress through interactive domains to determine the patient's condition and define necessary treatment steps. In order, these domains are (1) taking the patient's history (eg, chief symptom, comorbidities, economic status and so on), (2) performing a physical examination of the patient, (3) ordering diagnostic work-up, (4) making a diagnosis, and (5) outlining treatment and follow-up. These domains recreate an actual visit and the corresponding practice, and each domain is measured with several criteria. For example, 'making the diagnosis' is a comprehensive construct that goes beyond simply listing the primary diagnosis. Accordingly, the diagnosis domain queries (and scores) the patient's primary diagnosis, the severity of illness (with relevant scoring criteria such as the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification) and the listing of comorbidities/secondary diagnoses. CPVs have been validated against actual clinical care22 and in numerous clinical settings.23

The project team created 12 CPV cases: 6 each in COPD and sepsis (see online supplementary appendix 1 for case summaries). Completed CPVs were scored against predefined, evidence-based criteria. Quality-of-care scores for each care domain range from 0% to 100%, and an overall score is calculated from a provider's performance across all five care domains. Higher percentage scores reflect greater alignment with evidence-based and organisational best-practice recommendations.
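As an illustration only (not QURE's proprietary scoring engine), the domain and overall scores can be thought of as the percentage of evidence-based criteria met within each domain, averaged across the five domains; the data structure and equal domain weighting below are assumptions:

```r
# Hypothetical sketch of CPV-style scoring: each abstracted case is a set of
# pass/fail criteria tagged by care domain.
scored_case <- data.frame(
  domain = c("history", "history", "exam", "workup", "workup",
             "diagnosis", "treatment"),
  met    = c(TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE)
)

# Domain score: percentage of evidence-based criteria met within each domain
domain_scores <- tapply(scored_case$met, scored_case$domain,
                        function(x) 100 * mean(x))

# Overall CPV score: equal weight per domain (an assumption, not documented)
overall_cpv <- mean(domain_scores)
overall_cpv
```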


Caring for a single CPV patient takes about 20–30 min and requires the provider to make clinical decisions based on the medical information they elicit. Since all providers care for the same set of cases, we are able to benchmark provider actions without needing to adjust for each provider's case-mix. Because the cases were matched to specific clinical and financial issues PMPH wished to improve on, we could link their patient and financial metrics to changes we saw in the CPVs.

CPV measurement and feedback system

Each round of the P2Q2 study occurred approximately 4 months apart and included three components: (1) collection and measurement of clinical practice decisions using CPVs, (2) confidential, individual feedback given to providers on the care of their cases, and (3) group discussion to examine group variation and align best practices. Four rounds were completed. Within each round, each provider, working individually and at the time and place of their convenience, cared for two CPV patients (one each in sepsis and COPD). CPVs were randomly assigned to providers at the beginning of every round, and no individual provider saw the same case twice. Trained physician abstractors reviewed and scored the de-identified clinician responses to quantify evidence-based and non-evidence-based recommendations. Depending on the details of the clinical response, abstraction of each case typically takes between 3 and 5 min. Each CPV was scored by two different physician abstractors and adjudicated by a third in the case of a disagreement between the first two. Based on their care decisions, each participant received a feedback report via an individual, online web portal. The report included providers' quality-of-care scores for each of their cases, including benchmarking against their peers, and overall and domain-specific scores relative to the evidence-based literature. The customised feedback forms also made recommendations for care improvements and provided links to relevant clinical guidelines. After each round, the clinical items with the lowest performance or highest variation across the group were reviewed in hour-long, facilitated group discussions. Providers were encouraged to discuss specific areas of care and review opportunities for group consensus and improvement. Group discussions were done in person and simultaneously streamed via WebEx.
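A minimal sketch of the two-abstractor scoring rule described above; the function name and exact-agreement criterion are illustrative assumptions, since the text does not specify how agreement was defined:

```r
# Hedged illustration: each case is scored independently by two physician
# abstractors; a third adjudicates only when the first two disagree.
adjudicate <- function(score_a, score_b, score_c = NA) {
  if (score_a == score_b) {
    score_a                       # agreement: use the shared score
  } else {
    stopifnot(!is.na(score_c))    # disagreement: third abstractor decides
    score_c
  }
}

adjudicate(72, 72)      # returns 72
adjudicate(68, 74, 71)  # returns the adjudicated 71
```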

Patient-level data

To help identify target opportunities for improvement and secure patient-level data, PMPH used Premier's patient-level and provider-level QualityAdvisor (QA) data system. QA integrates quality, safety and financial data pulled from a hospital's electronic medical record, claims data, charge master and other resources. The PMPH patient-level data were from October 2015 to July 2017 for LOS, cost of hospitalisation and readmission rate: the 11 months of data from October 2015 to August 2016 established a baseline rate, and the data from September 2016 to June 2017 served as the poststudy determinations. QA also provided clinical information on items that had a direct impact on the financial and utilisation data being assessed, specifically the 3-hour sepsis bundle, level of care, diagnoses and prophylactic measures.

Economic data

We looked at cost and utilisation in two ways: (1) decreases in unnecessary testing and the resultant savings and (2) changes in the average cost per case for inpatients with COPD and sepsis. Testing costs came from the Centers for Medicare & Medicaid Services' Physician Fee Schedule and median values derived from 56 regional fee schedules for laboratory test, imaging and procedure costs.24–26 Changes in cost per case were taken from PMPH's observed and expected cost per case in QA.

Analysis

Descriptive statistics were used to summarise physician and hospital baseline characteristics. Between the first and final rounds, changes in both CPV and PMPH patient-level results were analysed using Fisher's exact test for binary outcomes and linear regression for continuous outcomes, for both overall and subgroup analyses, in Stata V.14.2. To determine changes in variation, we used a shift function in the 'rogme' package (0.1.1) for R (V.3.5.0).
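A minimal sketch of these round 1 versus round 4 comparisons, expressed in R (the study itself ran these models in Stata); the counts and scores below are illustrative, not study data:

```r
# Binary outcome (eg, correct primary diagnosis) by round: Fisher's exact test
tab <- matrix(c(21, 12,   # round 1: correct / incorrect (made-up counts)
                27,  6),  # round 4: correct / incorrect
              nrow = 2, byrow = TRUE)
fisher.test(tab)

# Continuous outcome (overall CPV score) regressed on round
set.seed(42)
dat <- data.frame(
  score = c(rnorm(33, mean = 62, sd = 9.5),   # simulated round 1 scores
            rnorm(33, mean = 76, sd = 7.0)),  # simulated round 4 scores
  round = factor(rep(c("1", "4"), each = 33))
)
summary(lm(score ~ round, data = dat))
```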

Results

Baseline provider characteristics

Thirty-three PMPH-affiliated hospitalists (32 physicians and 1 nurse practitioner) from four separate practice groups voluntarily participated in the study (table 1). The average age was 39.4±8.3 years, and 74.5% were women. On average, they had 7.6±6.7 years of experience, with 58.2±23.7 patient encounters per week. We compared the participants with the 14 providers who did not complete the study and found no significant difference in their characteristics (p>0.05 for all).

Table 1

Participating physician characteristics

Assessment of quality improvements

At baseline, we observed high levels of variability in the quality of care delivered—both among providers and compared with the evidence-based standard (table 2). The round 1 baseline CPV quality scores averaged 62.2%±9.5%. The lowest score by domain was in diagnosis, which averaged 46.5%±20.6%, followed by treatment (50.6%±16.3%). By hospital group affiliation, the highest average scorers were among the combined community practice group and individual independent providers (64.0%), followed by Academic Teaching (63.1%) and trailed by the ADS group (60.2%), although the differences proved not to be significant in univariate regression testing (p>0.05). At baseline an average of 1.3±1.5 unnecessary tests were ordered per case, with the ADS group ordering more tests (1.7±1.8; p=0.033) at a greater cost ($229±$339; p=0.169) than any other group.

Table 2

CPV results

Over four rounds, we serially measured care decisions and provided individual-level and group-level feedback to participants on the highest priority improvement opportunities. We found overall CPV quality scores improved 13.7% (p<0.001) among all groups by the end (table 2). Quality scores also showed significant levels of improvement (p<0.05 for all) in each care domain, with increases ranging from 5.9% to 28.8%.

In a more detailed analysis of the improvements, we compared quality improvements for sepsis and COPD. We found quality improved +15.9% for COPD and +11.3% for sepsis (p<0.001 for both). For both conditions, the largest increases by care domain were in diagnosis (+28.8% in sepsis and +29.0% in COPD), with strong improvements also in the treatment domain (+12.9% in sepsis and +20.7% in COPD).

Improvements in making the primary diagnosis of sepsis (from 63.6% to 82.7%) and COPD (from 56.1% to 78.8%), a major component in the diagnosis domain, were driven by better diagnostic work-up, including greater adherence to diagnostic items in the sepsis 3-hour bundle (table 3). In cases of suspected sepsis, appropriate serum lactate orders increased from 45.5% to 80.8% (p=0.006) and appropriate blood culture orders before antibiotic administration increased from 75.8% to 92.3% (p=0.089). Similarly, in COPD, we saw a 22.7% increase in identifying the correct primary diagnosis. This included a substantial improvement in the appropriate identification and documentation of respiratory failure, which increased from 16.7% to 62.5% (p=0.001). In treatment, we found specific improvements in key clinical areas that are related to better patient outcomes, specifically increases in admissions to appropriate levels of care for patients with COPD (39.4% at baseline vs 65.4% after round 4; p=0.042) and increased utilisation of formulary antibiotic regimens (31.3% vs 66.7%; p=0.053).

Table 3

Specific areas of CPV change

At baseline, providers ordered 1.3±1.5 unneeded tests per case at an average cost of $169±$284 (table 2). By group, average unnecessary tests ranged from a low of 0.9 to a high of 1.7 (p=0.033) and average costs ranged from $85 to $229 (p=0.125). At the end of the study, providers ordered fewer unneeded tests (0.9±1.4) at an average cost of $99±$194 per case, with no significant difference between groups (p=0.383 for test orders and p=0.780 for test costs). We note that while the difference in total spending was only marginally significant (p=0.066), the overall variation in spending decreased (SD from $284 to $194). Initially, the independent and community providers ordered fewer unnecessary tests and fewer necessary tests compared with the other provider groups. After multiple rounds, orders of necessary tests increased in all groups, while unnecessary test ordering declined by 0.5 per case in the Academic Teaching and ADS groups and remained approximately the same among the independent and community providers.

Assessment of practice standardisation

To evaluate changes in variation, we compared the SD in overall CPV scores between the baseline and final rounds. We found a statistically significant 24% relative decrease in the SD (p=0.038). Similarly, in the diagnostic work-up domain, variation in scores across the four groups showed a relative decrease of 30% (p=0.006), mainly due to more appropriate laboratory work-up and necessary test orders. While variation decreased in all remaining domains, these decreases did not achieve significance (p>0.05). By disease, although we saw a directional decrease in variation in nearly every domain, only the work-up score for sepsis changed significantly, decreasing from 21.0% to 14.6% (p=0.032).

To determine if measurement and feedback reduced the differences in practice across the four provider groups, we constructed an ordered probit model. At baseline we observed that non-affiliated, independent practitioners performed significantly better than the other three groups (p=0.034). After four rounds, we found the average CPV scores by group converged, with a lowest-to-highest group difference of 3.1% (range: 74.3%–77.4%) and no statistically significant differences between the groups (p=0.508). To further examine the decreased variation among groups, we used a standard shift function, which determines changes in scores by decile. The scatterplot reveals that the largest improvements were on the left-hand side of the distribution, where care quality was lowest (figure 1). The lowest decile showed a 39% additional relative improvement compared with the highest decile, which itself showed a 15% relative improvement over baseline, demonstrating convergence in provider care quality across the groups.
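For intuition, a simplified decile shift function in base R; the 'rogme' package used in the study estimates quantiles with the Harrell-Davis estimator and bootstrapped confidence intervals, whereas this sketch uses plain sample quantiles on made-up scores:

```r
# Compare baseline and final score distributions decile by decile
shift_by_decile <- function(baseline, final, probs = seq(0.1, 0.9, by = 0.1)) {
  b <- quantile(baseline, probs, names = FALSE)
  f <- quantile(final, probs, names = FALSE)
  data.frame(decile = probs, baseline = b, final = f, shift = f - b)
}

set.seed(1)
baseline_scores <- rnorm(33, mean = 62, sd = 9.5)  # illustrative round 1
final_scores    <- rnorm(33, mean = 76, sd = 7.0)  # illustrative round 4
shift_by_decile(baseline_scores, final_scores)
# Larger shifts at the lower deciles would indicate that the weakest
# performers improved the most, ie, convergence.
```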

Figure 1

Baseline versus final overall Clinical Performance and Value (CPV) score: (A) scatterplot, (B) shift in score for each decile and (C) changes in score by decile. Round 1 to round 4 difference.

We found similar score convergence among the provider groups in all individual care domains. We performed a rank-sum test between groups in the diagnosis and treatment domains and found differences between groups at baseline. The directed teaching and independent groups showed significantly different scores in sepsis (p=0.027) at baseline, but by the end of the study, with the exception of work-up, these between-group differences had disappeared (p=0.206).

Linking CPV improvements to patient-level data

We next turned to the impact on actual patient-level metrics. Using a pre–post analysis, we assessed the impact CPV-measured quality improvements, reduced variation and greater group consensus had on patients. This was done by taking all of the specific CPV changes and identifying the available, corresponding patient-level data. With CPV data, we observed a 35.2% relative improvement in listing the correct primary diagnosis for each condition by the end of the study. With PMPH patient-level data, over the same time period, the number of patients identified with a sepsis or COPD diagnosis increased 22.3%. For example, the identification of respiratory failure among patients with COPD increased in both the CPVs (16.7% vs 62.5%; p=0.001) and in the patient-level data (22.1% vs 28.0%; p=0.003). Above we noted an overall decrease in unnecessary tests, which for sepsis was 18.7%. For example, for ultrasounds in the sepsis cases, orders decreased by 13.6% (p=0.548) in the CPV data and by 13.1% (p=0.040) in the matching patient-level data.

Measuring financial impact

To calculate the economic benefits of higher quality and more standardised practice, we measured the savings from decreased unnecessary testing and three utilisation metrics: LOS, readmission rate and arithmetic cost per case. We also measured in-hospital mortality to guard against unforeseen negative consequences of improvements in these three metrics.

At baseline, providers ordered an average of 1.3 unnecessary tests per CPV case, accounting for an estimated $169 in unnecessary spending, based on 2016 Medicare rates. After four rounds of CPV measurement and feedback, unnecessary test orders fell to 0.9 tests per CPV case with a corresponding 41% reduction in spending. Taking the average $70 cost improvement ($169−$99) and applying it to the annualised number of cases for both conditions (1637) translates into savings of $114 590 over the course of the study year.
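The testing-savings arithmetic, reproduced as a quick check (figures taken directly from the text):

```r
per_case_saving <- 169 - 99     # $70 average reduction in unnecessary testing
annual_cases    <- 1637         # annualised sepsis + COPD cases
per_case_saving * annual_cases  # $114,590 estimated annual savings
```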

Next, we looked at changes in LOS, cost per case and readmissions. The patient-level data showed that these three metrics decreased for nearly all of the patients with sepsis and COPD over the course of the intervention. These metrics were assessed using O/E (observed/expected) risk-adjusted measures provided by Premier QA. O/E compares PMPH’s patient-level performance with what would be expected at peer hospitals given the same patient mix (see column 6 of table 4). We observed decreases in LOS for both sepsis and COPD, and a decrease in readmissions for sepsis. As an example, we found that the LOS O/E for patients with sepsis was 1.23 at baseline, meaning sepsis LOS was 23% higher than expected given their patient population and case-mix (8.96 days actual vs 7.28 days expected). By the end of the study, this gap decreased to just 2% above the expected (6.92 days actual vs 6.78 days expected) (table 4).
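The O/E arithmetic behind the sepsis LOS example is simply the ratio of observed to expected values:

```latex
\mathrm{O/E} = \frac{\text{observed LOS}}{\text{expected LOS}};\qquad
\text{baseline: } \frac{8.96}{7.28} \approx 1.23,\qquad
\text{final: } \frac{6.92}{6.78} \approx 1.02
```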

Table 4

Patient-level outcomes

The average cost per case savings showed some of the largest decreases. We again used the O/E analysis of patient-level data and found a 20% cost reduction for sepsis and 11% for COPD. To determine the financial impacts, we first determined, using PMPH data, the net cost per case reductions before and after the CPV engagement. We found that the average observed cost per case at baseline for sepsis was $29 556, 1.47 times the expected cost. After the CPV interventions, we observed that the cost per case fell to $20 683, yielding per patient savings of $8873, although this figure does not account for changes in O/E over time. Instead, we performed a more conservative analysis, which took into account changes in the expected costs over the course of the project, and found a postproject O/E of 1.17. Applied to the expected cost per case at baseline, this yields a cost per case of $23 524, representing what baseline costs would have been at the postproject level of O/E efficiency. This more conservative approach estimates a cost savings per case of $6032. Multiplied over the 506 patients with sepsis seen during the baseline period, this translates into overall savings of $3 052 129. We performed a similar average cost of care calculation for the 874 baseline patients with COPD, yielding total cost per case savings of $1 744 382. In total, over the 11 months of baseline data, the changes resulted in cost savings of $4 796 491. Annualising this amount yields a total savings of $5 232 536.
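The conservative sepsis calculation, reproduced step by step (all inputs come from the text; small differences against the published totals reflect rounding of the per-case figure):

```r
obs_baseline <- 29556                 # observed sepsis cost per case, baseline
oe_baseline  <- 1.47                  # baseline O/E ratio
oe_post      <- 1.17                  # postproject O/E ratio

expected_baseline <- obs_baseline / oe_baseline   # ~$20,106 expected at baseline
counterfactual    <- oe_post * expected_baseline  # ~$23,524: baseline costs at
                                                  # postproject efficiency
saving_per_case   <- obs_baseline - counterfactual  # ~$6,032 per case
saving_per_case * 506                 # ~$3.05M across baseline sepsis patients
```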

Given the observed declines in LOS and cost per case, we wanted to confirm that there was no adverse impact on mortality. We found that the observed mortality rates and O/E mortality ratios declined for both sepsis and COPD, although only the observed sepsis mortality rate (from 18.9% to 13.9%) was significant (p=0.019) (table 4).

Discussion

As the need to improve quality, standardise practice and control unnecessary costs increases, measuring and improving quality of care to offset practice variation and perverse payment incentives will have to become a core component of care delivery. As more provider groups come together, including hospitalist specialists, newer approaches to ensure quality care are needed.27 28 The foundational problem to overcome, therefore, is to find a practical and effective way to measure and improve practice.

At PMPH, we implemented a structured approach, with an explicit pre–post evaluation of clinical and cost impact, that deliberately sought to measure clinical practice variation and standardise care through serial engagement and feedback on improvement opportunities using CPVs.

At baseline, we found wide practice variation among providers in the four groups caring for the same pool of CPV patients, with group average quality scores ranging from 60.2% to 67.6%. In this project, the average performance on the CPV cases improved by 14% and the variation in practice decreased by 24%. The magnitude of the quality improvement exceeds the clinically detectable 3%–5% threshold needed to improve patient outcomes,29 as reflected in the patient-level measures obtained in this study. Specific improvements in group performance were strongest in diagnosis and treatment, which started out as the two lowest scoring domains. The large decrease in variation among the hospitalists was accompanied by the dramatic lowering of costs associated with the CPV improvements. The gains seen here are particularly important as these are the clinical domains we expect to have the greatest quality and economic impact.

The case simulations were a vehicle to achieve the broader goals of improving outcomes and lowering unnecessary costs (ie, providing more cost-effective care). To measure the impact on patient-level outcomes, we tracked hospital data for the same time period. Our review of the changes in the patient-level data showed broad and significant improvements in clinical practice (diagnostic imaging, level of care admissions and diagnostic accuracy) and in utilisation and financial metrics (LOS, cost per case and readmissions). These changes produced fewer in-hospital deaths, more efficient use of hospital resources and over $5 million in annual savings. At an operational level, being in a high-cost region (New Jersey) underscores the urgency and the impact of these findings. At a clinical level, the decrease in mortality for these two conditions is the most compelling outcome.

One of the most important findings is the use of quality improvement to bring together the disparate hospitalist groups. In today's healthcare market, shaped by mergers and acquisitions, disparate groups find themselves part of larger systems. From a system's perspective, it is impractical and inconsistent with mission fundamentals to accommodate differences in care. At best, these differences are stylistic but costly; at worst, they invite errors, added expense and worse outcomes. In this study, with measurement and feedback, we were able to standardise practice among the four groups, raising the performance of the lowest scoring providers even more than the highest performers and reducing the care differences between groups to statistical equivalence. This finding, when shared with the groups, dispelled misconceptions about better or more qualified groups and refocused the discussion on cooperation, collaboration and better system-wide care. We believe that the clinical outcomes and cost savings, which were so significant in this study, were in large part due to the groups coming together around better practice.

The use of standardised, simulated patients to measure practice and deliver targeted feedback overcomes many challenges to clinical practice standardisation,13 including heterogeneity of patient cases, lack of opportunities to share best practices and knowledge gaps.14 CPV simulations enable providers to care for the same group of patients and generate prospective clinical practice data along with individual-level and group-level feedback. Delivering confidential, customised feedback gives providers an opportunity to review their performance in relation to their peers and track their improvements in care in a non-threatening manner. Data aggregation at the group level identifies specific improvement gaps across groups and creates opportunities for discussion among multiple providers that either were not available before or lacked a common denominator for comparing performance. The reduction in variation seen among the four separate hospitalist groups at PMPH is indicative of the impact engagement and feedback can have on practice.

There are limitations to the study, starting with the limited availability of patient-level data against which to compare CPV-measured performance. While this paper showcases large clinical and economic gains, the true clinical impact and savings are likely greater, given the limited number of quality metrics reviewed in the CPV data. The cost savings demonstrated here, although not statistically significant, do trend towards lower cost, which is encouraging. This is particularly true because the specific costs in our analysis leave out many costs associated with other aspects of each patient's admission; the absolute improvement in cost may therefore be larger than captured in our study. Finally, we did not have a control group to distinguish these significant improvements from secular effects, a ubiquitous challenge in real-world studies. Notwithstanding, in 2014, 73 healthcare organisations across New Jersey formed an association to improve sepsis care in the state, and their data show a 10.8% relative decline in mortality, less than half of what was achieved at PMPH.30 Moreover, we are not aware of any structured, institution-wide effort during the study period to change clinical documentation in a way that would have affected the 'expected' components of the O/E ratio.

As new arrangements and affiliations among providers grow, there is an increasing need to ensure they are both clinically and financially beneficial, particularly in hospitalist care.31 Care standardisation around evidence-based guidance is one way of doing so. This study shows that a system of measurement using simulated patients, feedback and engagement to drive provider behavioural change and group cohesion, and parallel tracking of hospital data, leads to significant improvements in patient outcomes and health system savings.


Footnotes

  • Funding This study was funded by Premier.

  • Competing interests QURE, whose intellectual property was used to prepare the cases and collect the data, was contracted by Penn Medicine Princeton Health (formerly Princeton HealthCare System).

  • Patient consent for publication Not required.

  • Ethics approval The CPV data gathered were obtained as part of clinical quality and safety. The data were not collected for research purposes and contained no patient information. Accordingly, per the Office of Research Integrity of the US Department of Health and Human Services under the US Code of Federal Regulation, 45 CFR 46, the study was exempt from Institutional Review Board review.

  • Provenance and peer review Not commissioned; externally peer reviewed.