What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poorer quality care? A systematic review of the literature

Abstract

Background

Despite increasing interest in, and publication of, risk-adjusted hospital mortality rates, their relationship with underlying quality of care remains unclear. We undertook a systematic review to ascertain the extent to which variations in risk-adjusted mortality rates were associated with differences in quality of care.

Methods

We identified studies in which risk-adjusted mortality and quality of care had been reported in more than one hospital. We adopted an iterative search strategy using three databases: MEDLINE (from 1966), HealthSTAR (from 1975) and CINAHL (from 1982). We identified potentially relevant studies on the basis of the title or abstract, obtained these papers and included those which met our inclusion criteria.

Results

From an initial yield of 6,456 papers, 36 studies met the inclusion criteria. Several of these studies considered more than one process-versus-risk-adjusted-mortality relationship. In total we found 51 such relationships, covering a wide range of clinical conditions and using a variety of methods. A positive correlation between better quality of care and lower risk-adjusted mortality was found in about half the relationships (26/51; 51%), but the remainder showed either no correlation (16/51; 31%) or a paradoxical correlation (9/51; 18%).

Conclusion

The general notion that hospitals with higher risk-adjusted mortality provide poorer quality of care is supported neither consistently nor reliably by the empirical evidence.

Background

The relationship between quality of care and outcome continues to attract the interest of a wide spectrum of stakeholders including patients, carers, healthcare providers, researchers, politicians, the media and others [1]. The unit of analysis is often an acute hospital and outcome is frequently defined in terms of risk-adjusted mortality (in-hospital or 30-day). The rationale for using risk-adjusted mortality rates is that they purport to distil out the contribution of patient case-mix factors and the play of chance, thereby exposing a residual unexplained variation in mortality which may implicate quality of care. This leads naturally to the ranking of hospitals according to risk-adjusted mortality rates, with an implied correlation with quality of care [2]. Organisations that produce performance ratings based on mortality rates include Leapfrog [3] and US News ("America's Best Hospitals") [4] in the USA, and, in the UK, the Dr Foster company (the "Good Hospital Guide" [5]) and the Healthcare Commission [6], which uses a "star ratings" system for National Health Service (NHS) hospitals.

We sought to examine the empirical evidence to clarify the relationship between quality of care and risk-adjusted mortality by undertaking a systematic review which asked the question: "To what extent do hospitals with higher risk-adjusted mortality rates provide poorer quality of clinical care?"

Methods

We focused on studies which compared risk-adjusted mortality rates in two or more hospitals and related these to adherence to existing evidence-based standards of clinical care. Evidence of quality of care in our sample of studies was typically obtained either from patient case-notes and/or clinical databases ("explicit review") or from expert panels which judged quality of care, typically in the form of inspection reports ("implicit review").

An earlier paper by Iezzoni [7] cited a number of studies that had attempted to answer our research question. Using her paper as a starting point, we identified key words and MEDLINE subject headings (MeSH terms) in these studies. Many contained some of the MeSH terms "process assessment", "outcome assessment", "outcome and process assessment", "quality indicators, health care" and "quality of health care". Most also included "mortality" or "hospital mortality" as a MeSH term, or in the title or abstract.

We applied our search strategy (Additional File 1) to three databases: MEDLINE, CINAHL (Cumulative Index to Nursing and Allied Health Literature) and HealthSTAR (covering health services management literature). We imported references into Reference Manager, version 10 and removed duplicate references.

We included four other papers of which we were already aware that met the inclusion criteria but which the database search had not identified. One of these [54] has subsequently been published in a peer-reviewed journal [8]. We scanned the references of all papers and review articles that we obtained to identify any further studies which might meet the inclusion criteria.

We did not include several types of study:

  1. Studies that primarily examined the relationship between organisational/structural factors and quality of care (e.g. technical equipment [9], nurse-patient ratio [10], physician staffing [11] or public versus private funding [12, 13]) were excluded on the grounds that the underlying evidence base for such organisational factors is sparse. Moreover, a review of the impact of organisational factors on intensive care outcomes has recently been undertaken [14].

  2. Studies which examined the relationship between volume and outcome were excluded, as volume is not in itself an indicator of quality of care but a structural factor often associated with quality [15], and the extensive literature on the subject has been repeatedly and systematically reviewed [16–18].

  3. Studies whose aim was to discover whether a particular clinical process was effective were excluded, as we were concerned with the use of existing knowledge, not the generation of new knowledge [19].

  4. Studies that compared a clinical process in one hospital with a clinically equivalent alternative in another [20–22] were also excluded.

  5. Studies that measured quality of care and risk-adjusted mortality but presented insufficient data to enable any conclusions about the nature of the relationship to be drawn were excluded [23].

The authors independently assessed which papers met the inclusion criteria. Where discrepancies emerged (n = 14 papers), the inclusion or exclusion of these studies was decided by consensus.

For each study we classified the nature of the relationship between quality of care and risk-adjusted mortality as intuitive (if better care was associated with lower risk-adjusted mortality), no correlation (if there was no correlation between quality of care and risk-adjusted mortality) or paradoxical (if better care was associated with higher risk-adjusted mortality). A study could contribute more than one relationship, as some studies examined several processes or different clinical conditions.

Results

Of 6,456 papers located from database searching, initial screening identified 302 papers as meriting further attention, either because their titles or abstracts appeared to meet the inclusion criteria, or because the papers were relevant in another way (e.g. reviews). A further five papers [54–57, 59] were located from other sources (e.g. references). On the basis of title or abstract, two of the authors independently selected 91 of these papers to appraise. After applying the inclusion and exclusion criteria and agreeing where necessary by consensus, 36 studies remained. One of these was unobtainable [57], but sufficient information was provided in another source [24] for us to be confident that it met the inclusion criteria.

Studies were mainly conducted in intensive care units (ICUs) [39, 72, 73], surgical departments [42–45, 57–59, 74] or within general medicine [38, 41, 44, 57, 58, 70]. Conditions most frequently investigated included acute myocardial infarction (AMI) [41, 48, 50, 53–55, 57, 58, 64–69, 71], stroke [41, 49, 51, 62], coronary artery bypass graft surgery (CABG) [45, 57, 58, 74] and Pneumocystis carinii pneumonia (PCP) [40, 60].

There was great diversity in study design, and different studies using the same approach drew conflicting conclusions. Walker [61], using a checklist to review case notes, found that hospitals with better adherence to processes of care had lower mortality, whereas Dubois [41] did not. Studies of the same condition also failed to agree; for AMI, for example, Keeler [70] found better care in low-mortality hospitals whereas Park [53] found better quality of care in high-mortality hospitals. Results in some studies depended upon the process examined: Chen [66] found that lower-mortality hospitals had higher (better) rates of prescribing of aspirin and β-blockers, but lower (worse) rates of thrombolysis, when compared with high-mortality hospitals. Investigating the relationship between mortality and the rate of quality-of-care concerns across the USA, Hartz [46] found positive or negative correlation coefficients in different states.

Across the 36 included studies we identified 51 distinct relationships between quality or processes of care and risk-adjusted mortality. Some studies that measured the same process in different settings or subgroups found that the relationship varied according to where it was measured (e.g. [46]) or how the data were analysed (e.g. [71]), and in such cases we counted the study more than once in Additional File 4.

Studies which examined the relationship between clinical quality-of-care variables and risk-adjusted mortality fell into two categories. In most cases (n = 25/36), the authors directly correlated process and risk-adjusted mortality across some or all of the hospitals in the study (Additional File 2); for example, Dubois [41] undertook case-note reviews of patients admitted with stroke, pneumonia and AMI in six high-mortality outlier hospitals and six low-mortality outliers to see whether there was any difference in the quality of care. In eleven studies, however, the primary comparison was between hospitals of one sort and hospitals of another (e.g. teaching versus non-teaching hospitals), but both clinical process variables and mortality had been measured (Additional File 3). In these cases the comparison of process and mortality is indirect; for example, Gottwik [69] compared clinical processes (aspirin, reperfusion etc.) in hospitals with and without cardiology departments, and found that usage was greater (better), and mortality lower, in hospitals with cardiology departments than in those without.

To accommodate the diversity of study design, we analysed the 36 studies in the following ways: (A) direct versus indirect studies; (B) studies grouped by clinical condition; (C) studies grouped by organisation/project; and (D) studies grouped by whether clinical or administrative data were used in risk adjustment (Additional File 4A–4D).

Direct and indirect studies combined

Up to 26 of the 51 relationships provided evidence that better quality of care correlated with lower risk-adjusted mortality rates, which might be considered intuitive. Three of these, however, were intuitive only because of the impact of one outlying hospital, and would have demonstrated no correlation between quality of care and mortality had that outlier been excluded [51, 59, 62]. Sixteen relationships (19 if those dependent on a single outlying hospital are included) showed no correlation. Nine showed a paradoxical correlation, with better quality of care in hospitals with higher risk-adjusted mortality rates (Additional File 4A).

Studies grouped by clinical condition(s)

To explore whether a relationship between quality of care and risk-adjusted mortality was more commonly observed for specific medical conditions, we also analysed studies by condition where more than one study had examined the same condition. Additional File 4B shows that approximately half of all studies based on a particular condition found some degree of positive correlation between better quality of care and lower risk-adjusted mortality; of the remainder, around two thirds found no correlation and a third found a paradoxical correlation.

Studies grouped by database or collection of health care units

Some of the above studies could be considered non-independent, in that they involve repeated study of the same database or collection of health care units. We identified three such clusters in Additional File 4C: Co-operative Cardiovascular Project studies [64–68, 71] (most of which analysed hospitals in different ways using the same clinical dataset); Health Care Financing Administration studies [46, 50, 53, 70]; and Veterans Affairs hospital studies [38, 43, 44, 60]. The results within these clusters are not homogeneous: we found a similar spread of intuitive, null and paradoxical correlations between quality of care and risk-adjusted outcomes.

Risk-adjustment method

Despite evidence that risk-adjusted mortality is affected by how risk adjustment is undertaken [7, 25], only six studies explored the effect of applying different clinical risk-adjustment methods. In three cases [50, 53, 73], the effect on mortality was limited; in another, three out of four "high-mortality" hospitals were no longer outliers after accounting for procedure volume [45]. In one study [55, 56], mortality was risk-adjusted for the condition of patients on admission and, separately, for ethnicity, payment method and conditions diagnosed later in the admission (which might themselves be caused by poor care); the augmented model reduced the variation in mortality but was compromised by a lack of coded information. A final study, involving five hospitals, found that more extensive risk adjustment further reduced the variation in adjusted mortality rates for stroke patients, although one hospital still appeared to have significantly higher risk-adjusted mortality [62].

In our review, some studies used clinical data from hospital records and some used administrative data collected, for example, for reimbursement claims. One might presume that clinical data provide more detailed information for risk adjustment. However, the proportion of intuitive, null and paradoxical results did not differ according to whether clinical or administrative data were used (Additional File 4D).

Discussion

Our systematic review found that the relationship between quality of care and risk-adjusted mortality is inconsistent. Whereas about half the studies reported a positive correlation between quality of care and risk-adjusted mortality, half did not. The notion that mortality can be used to identify poor quality of care stems from a simple function which predicates mortality on three key variables: patient risk factors (case-mix), the play of chance and quality of care. The rationale is that if adequate adjustment can be made for patient case-mix factors (hence risk-adjusted mortality) and for the play of chance, then the residual unexplained variation in mortality must be attributable to quality of care. This is a fallacy [26], because it does not acknowledge the role of unmeasured or immeasurable case-mix factors, or of differences in how definitions are applied, both of which can affect outcome irrespective of quality. Thus, there are three reasons why outcomes may vary even after case-mix adjustment: (i) genuine differences in process measures of quality of care not measured in the study, e.g. the vigilance of nursing staff, which is harder to measure and therefore rarely captured; (ii) differences in prognosis or risk not captured in the study; and (iii) differences in definitions, or in how definitions were applied, in different places.
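
To make the case-mix adjustment fallacy concrete, the sketch below is our own illustration (it is not an analysis from any included study, and all rates are hypothetical): two simulated hospitals deliver identical quality of care but differ in an unmeasured severity factor, and because the risk model adjusts only for the measured factor, one hospital still appears to be a high-mortality outlier.

```python
# Illustrative simulation only: hypothetical rates, not data from the review.
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # admissions per hospital

def simulate(frailty_rate):
    """Return (observed deaths, expected deaths) for one hospital.
    The risk model 'sees' only the measured comorbidity and assumes the
    average frailty prevalence (20%) of the reference population."""
    comorbidity = rng.random(n) < 0.30       # measured case-mix factor
    frailty = rng.random(n) < frailty_rate   # unmeasured case-mix factor
    p_death = 0.05 + 0.10 * comorbidity + 0.10 * frailty  # identical "quality" everywhere
    observed = (rng.random(n) < p_death).sum()
    expected = (0.05 + 0.10 * comorbidity + 0.10 * 0.20).sum()
    return observed, expected

obs_a, exp_a = simulate(frailty_rate=0.20)  # hospital A: average frailty
obs_b, exp_b = simulate(frailty_rate=0.40)  # hospital B: frailer patients, same care

print(f"Hospital A standardised mortality ratio: {obs_a / exp_a:.2f}")  # ~1.0
print(f"Hospital B standardised mortality ratio: {obs_b / exp_b:.2f}")  # ~1.2, flagged as an outlier
```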

Furthermore, many studies are prone to Type II error, because few hospitals have sufficient patients to ensure that differences in outcomes between units reach statistical significance. In reality, even for common operations, only a minority of hospitals have sufficient caseloads for even a doubling of the mortality rate to be statistically significant [27].
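
As a rough illustration of the caseloads involved (a generic two-proportion power calculation with assumed baseline rates, not the analysis reported in [27]), the sketch below estimates how many patients per group are needed before a doubling of mortality from 4% to 8% becomes statistically detectable.

```python
# Normal-approximation sample size for comparing two proportions.
# The 4% and 8% mortality rates are illustrative assumptions.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Patients needed in each group to detect p1 vs p2 at the given alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

print(round(n_per_group(0.04, 0.08)))  # ~552 patients per group
```

Under these assumptions roughly 550 patients are needed in each comparison group, and considerably more if the baseline mortality or the true difference is smaller; few individual hospitals accumulate such caseloads for a single condition or operation.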

These factors may explain why, in our review, we did not find a consistent relationship between quality of care and risk-adjusted mortality.

Nonetheless, even if risk-adjusted mortality rates are affected by quality of care, how well would they perform as a screening tool for poor-quality care? A modelling exercise by Hofer [28], in which 10% of hospitals had poor-quality care (25% of deaths preventable versus 5% elsewhere), found that the sensitivity for detecting poor-quality hospitals on the basis of high mortality rates was only 35%, with a positive predictive value (PPV) of 52%. Mortality for individual medical conditions proved an even poorer screening tool (e.g. sensitivity for pneumonia was 10% and PPV 21%, implying that detection via mortality rates would miss 90% of poor-quality hospitals, whilst four out of five hospitals with high risk-adjusted mortality rates had acceptable quality). Similar exercises by Zalkind [29] and Thomas [30], with different input parameters, came to the same conclusions.
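
The screening-test arithmetic can be sketched directly. The snippet below is our illustration of why a low prevalence of poor-quality hospitals depresses the positive predictive value; the 10% prevalence mirrors the figure quoted above, but the specificity values are hypothetical and are not taken from Hofer's model.

```python
# PPV of a "high risk-adjusted mortality" flag via Bayes' theorem.
# Prevalence of truly poor-quality hospitals assumed to be 10%; specificities are hypothetical.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 0.10
for specificity in (0.90, 0.95, 0.99):
    print(f"sensitivity 35%, specificity {specificity:.0%}: PPV = {ppv(0.35, specificity, prevalence):.0%}")
# Output: PPV of roughly 28%, 44% and 80% respectively; even at 95% specificity,
# more than half of the hospitals flagged as high-mortality would be providing acceptable care.
```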

We found that a variety of innovative and complex study designs had been adopted to address the review question, and noted no overall consensus on the ideal study design. Further studies should not be undertaken lightly, not only because of the methodological challenges [26] but also because of the vast quantity of accurate data required. The cost of collecting sufficient data for a risk-adjustment system that would allow fair comparisons of outcomes and quality of care in Californian hospitals was estimated at $61 million in 1990 [31]. An inherent dilemma is that studies which are sufficiently large to detect a significant difference in quality or mortality tend to rely upon administrative databases for clinical data (both for risk-adjusting mortality and for measuring quality of care); such data are much easier to obtain but may be less reliable than data obtained by manually searching medical records [32, 33].

There are several limitations to our review:

We relied upon three medical databases to identify relevant studies and only cited grey literature when it was either indexed in the databases or referenced in existing studies. The possibility that studies demonstrating a relationship between better quality of care and lower mortality are more likely to be published is essentially untestable, though it is clear that studies demonstrating the opposite exist.

Several papers described different aspects of the same study [57, 58], analysed the same data in different ways [66–68] or covered different time periods [42, 59], meaning that studies were not always independent, although our stratified analyses attempted to control for this.

Unlike reviews of randomised controlled trials, for which comprehensive checklists have been developed to appraise the quality of individual studies [34], no such criteria exist for the assessment of quality-of-care studies. In appraising each study, we had to judge how rigorously it was conducted and how valid its conclusions were. For example, in studies where independent examiners inspected quality of care in high- and low-risk-adjusted-mortality hospitals, it was important to establish whether the examiners were blind to whether they were visiting a high- or low-mortality hospital. Where papers discussed previous studies, we noted any comments about their perceived limitations. Where papers generated correspondence, we looked for the letters and the authors' responses.

Calculated mortality rates vary depending upon the level of detail in the risk-adjustment method. Although several studies acknowledged this point, only six [45, 50, 53, 55, 56, 73] recalculated mortality rates using different risk-adjustment techniques. Indeed, one writer sardonically remarked that canny hospitals might even try to calculate their risk-adjusted mortality in different ways and only publish the most favourable result [7]. If the identification of a hospital as high-mortality is somewhat arbitrarily dependent upon which method of risk adjustment is used, it is hardly surprising that evidence of a quality-mortality relationship is inconsistent, with some studies "correctly" identifying poor-quality outliers, and others missing poor-quality outliers and identifying false positives instead. We suggest that future studies comparing risk-adjusted outcomes should include a sensitivity analysis using different risk-adjustment algorithms.

Another important methodological issue is hindsight bias [35]. If peer-review teams visiting hospitals are not blinded, high-mortality hospitals may be subjected to greater scrutiny because of the case-mix adjustment fallacy [53]. Most studies involving peer review stated that reviewers were blind to mortality status. Maintaining such blinding is relatively straightforward when reviewing hospitals but much more challenging when patient case-notes are being reviewed [36].

The definition of mortality was inconsistent. Some studies used inpatient deaths whilst others counted deaths within 30 days or more of admission or surgery. The identification of outlier hospitals might therefore vary depending on whether all deaths within a defined time period, all hospital deaths attributable to certain conditions, or all hospital deaths are counted. Only three studies measured mortality at multiple time points; all found a similar relationship between process and mortality regardless of the time point used [53, 64, 74].

Studies which attempt to correlate adherence to processes with mortality may be susceptible to the ecological fallacy. Some studies used mortality for entire hospitals but assessed quality of care for specific groups of patients in those hospitals. This may explain why some but not all processes appear to be inversely related to mortality: a hospital could have a low overall mortality yet deliver poor care, and have a high mortality rate, for patients with AMI; some studies would have classified this as a low- rather than a high-mortality hospital. Judging the degree to which the quality criteria related to the measured outcomes was subjective, so it was not easy to categorise studies by the closeness of fit between a process and the outcome that process might affect. In any event, if the quality of care for one condition correlates with care in general, and if care in general correlates with outcome, then there should be a correlation between care for one condition and outcomes over many conditions.

Another example of susceptibility to the ecological fallacy arises in studies where quality of care was not necessarily assessed over the same time period as mortality [46, 63]; significant changes in clinical practice could have occurred in the interim.

We are aware of one further paper published since our original search was undertaken [37]. This study found that, despite significant correlations between risk-adjusted mortality and certain process measures, the process measures together explained only 6% of the variation in hospital mortality rates for patients with AMI.

Given the consistent finding across a wide range of studies that quality of care is only weakly associated with hospital mortality, we suggest that further research of this kind is unlikely to add to the existing body of information. However, there is a need to develop more subtle measures of quality of care, both at the level of patient contact (e.g. the vigilance of nursing observations or the technical proficiency of surgeons) and at the level of the system (e.g. teamwork and human resources policies).

Conclusion

Our findings are in agreement with a previous, but not systematic, review [24] of the relationship between quality of care and risk-adjusted mortality. The authors concluded that whilst hospitals that delivered poor-quality care could have higher risk-adjusted mortality rates, hospitals with higher-than-expected risk-adjusted mortality rates did not necessarily provide poor quality care, and different risk-adjusted mortality rates in individual hospitals were not indicative of differences in quality of care.

Despite important methodological concerns, the production of risk-adjusted mortality rates will almost certainly continue; however, both logical argument and the empirical evidence demonstrate that the link between quality of care and risk-adjusted mortality remains largely unreliable.

References

  1. Jacobson B, Mindell J, McKee M: Hospital mortality league tables: question what they tell you – and how useful they are. British Medical Journal. 2003, 326 (7393): 777-778. 10.1136/bmj.326.7393.777.

  2. Barts and the London NHS Trust. Trust "has second lowest mortality rate in the NHS", 28 February 2006. (accessed 5 June 2006), [http://www.bartsandthelondon.org.uk/news/story.asp?id=1141&section_id=9]

  3. The Leapfrog Group. (accessed 5 February 2006), [http://www.leapfroggroup.org/home]

  4. US News and World Report – Best Hospitals 2006. (accessed 5 February 2006), [http://www.usnews.com/usnews/health/best-hospitals/tophosp.htm]

  5. Dr. Foster Good Hospital Guide. (accessed 5 February 2006), [http://www.drfoster.co.uk/ghg/]

  6. Healthcare Commission NHS star ratings. (accessed 5 February 2006), [http://www.healthcarecommission.org.uk]

  7. Iezzoni LI: The risks of risk adjustment. Journal of the American Medical Association. 1997, 278 (19): 1600-7. 10.1001/jama.278.19.1600.

  8. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC, Pollack CV, Newby LK, Harrington RA, Gibler WB, Ohman EM: Association between hospital process performance and outcomes among patients with acute coronary syndromes. Journal of the American Medical Association. 2006, 295: 1912-1920. 10.1001/jama.295.16.1912.

  9. Bastos PG, Knaus WA, Zimmerman JE, Magalhaes A, Sun X, Wagner DP: The importance of technology for achieving superior outcomes from intensive care. Brazil APACHE III Study Group. Intensive Care Medicine. 1996, 22 (7): 664-669.

  10. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH: Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. Journal of the American Medical Association. 2002, 288: 1987-1993. 10.1001/jama.288.16.1987.

  11. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL: Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. Journal of the American Medical Association. 2002, 288 (17): 2151-62. 10.1001/jama.288.17.2151.

  12. Hartz AJ, Krakauer H, Kuhn EM, Young M, Jacobsen SJ, Gay G, Muenz L, Katzoff M, Bailey RC, Rimm AA: Hospital characteristics and mortality rates. New England Journal of Medicine. 1989, 321 (25): 1720-5.

  13. Devereaux PJ, Schunemann HJ, Ravindran N, Bhandari M, Garg AX, Choi PT, Grant BJ, Haines T, Lacchetti C, Weaver B, Lavis JN, Cook DJ, Haslam DR, Sullivan T, Guyatt. GH: Comparison of mortality between private for-profit and private not-for-profit hemodialysis centers: a systematic review and meta-analysis. Journal of the American Medical Association. 2002, 288 (19): 2449-57. 10.1001/jama.288.19.2449.

  14. Carmel S, Rowan K: Variation in intensive care unit outcomes: a search for the evidence on organizational factors. Current Opinion in Critical Care. 2001, 7: 284-296. 10.1097/00075198-200108000-00013.

  15. Epstein AM: Volume and outcome – it is time to move ahead. New England Journal of Medicine. 2002, 346 (15): 1161-1163. 10.1056/NEJM200204113461512.

  16. Halm EA, Lee C, Chassin MR: Is volume related to outcome in health care? A systematic review and methodological critique of the literature. Annals of Internal Medicine. 2002, 137: 511-520.

  17. Birkmeyer JD, Siewers AE, Finlayson EVA, Stukel TA, Lucas FL, Batista I, Welch HG, Wennberg DE: Hospital volume and surgical mortality in the United States. New England Journal of Medicine. 2002, 346 (15): 1128-1137. 10.1056/NEJMsa012337.

  18. Sowden A, Deeks J, Watt I, Sheldon T: Relationship between volume and quality of health care: a review of the literature. 1995, York: NHS Centre for Reviews and Dissemination

  19. Horbar J, Carpenter JH, Buzas J, Soll RF, Suresh G, Bracken MB, Leviton LC, Plsek PE, Sinclair JC: Collaborative quality improvement to promote evidence based surfactant for preterm infants: a cluster randomised trial. British Medical Journal. 2004, 329: 1004-1010. 10.1136/bmj.329.7473.1004.

  20. Karlson BW, Kalin B, Karlsson T, Svensson L, Zehlertz E, Herlitz J: Quality assurance with regard to outcome and use of medical resources for patients hospitalized with acute chest pain: a comparison between a city university hospital and a county hospital. European Journal of Emergency Medicine. 2003, 10 (1): 6-12. 10.1097/00063110-200303000-00003.

  21. Rosenthal GE, Larimer DJ, Owens KE: Treatment of patients with acute myocardial infarction at a Veterans Affairs (VA) hospital and a non-VA hospital. Journal of General Internal Medicine. 1994, 9 (8): 455-8. 10.1007/BF02599064.

  22. Targownik LE, Gralnek IM, Dulai GS, Spiegel BM, Oei T, Bernstein CN: Management of acute nonvariceal upper gastrointestinal hemorrhage: comparison of an American and a Canadian medical centre. Canadian Journal of Gastroenterology. 2003, 17 (8): 489-95.

  23. Grupo Colaborativo Neocosur: Very-low-birthweight infant outcomes in 11 South American NICUs. Journal of Perinatology. 2002, 22: 2-7. 10.1038/sj.jp.7210591.

  24. Thomas JW, Hofer TP: Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Medical Care Research and Review. 1998, 55 (4): 371-404. 10.1177/107755879805500401.

  25. Iezzoni LI, Shwartz M, Ash AS, Hughes JS, Daley J, Mackiernan YD: Using severity-adjusted stroke mortality rates to judge hospitals. International Journal for Quality in Health Care. 1995, 7 (2): 81-94. 10.1016/1353-4505(95)00007-I.

  26. Lilford R, Mohammed MA, Spiegelhater D, Thomson R: Use and misuse of process and outcome data in managing performance of acute medical care; avoiding institutional stigma. Lancet. 2004, 363: 1147-1154. 10.1016/S0140-6736(04)15901-1.

  27. Dimick JB, Gilbert Welch H, Birkmeyer JD: The problem with small sample size. Journal of the American Medical Association. 2004, 292: 847-851. 10.1001/jama.292.7.847.

  28. Hofer TP, Hayward RA: Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses?. Medical Care. 1996, 34 (8): 737-53. 10.1097/00005650-199608000-00002.

  29. Zalkind DL, Eastaugh SR: Mortality rates as an indicator of hospital quality. Hospital and Health Services Administration. 1997, 42 (1): 3-15.

  30. Thomas JW, Hofer TP: Accuracy of risk-adjusted mortality rate as a measure of hospital quality of care. Medical Care. 1999, 37 (1): 83-92. 10.1097/00005650-199901000-00012.

  31. Romano PS, Zach A, Luft HS, Rainwater J, Remy LL, Campa D: The California Hospital Outcomes Project: using administrative data to compare hospital performance. Joint Commission Journal on Quality Improvement. 1995, 21: 668-682.

  32. Pine M, Norusis M, Jones B, Rosenthal GE: Predictions of Hospital Mortality Rates: A Comparison of Data Sources. Annals of Internal Medicine. 1997, 126 (5): 347-354.

  33. Schneeweiss S, Maclure M: Use of comorbidity scores for control of confounding in studies using administrative databases. International Journal of Epidemiology. 2000, 29: 891-898. 10.1093/ije/29.5.891.

  34. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF, for the QUOROM Group: Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. The Lancet. 1999, 354: 1896-1900. 10.1016/S0140-6736(99)04149-5.

  35. Fischhoff B: Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance. 1975, 1: 288-299.

  36. Lilford RJ, Mohammed MA, Braunholtz D, Hofer TP: The measurement of active errors: methodological issues. Quality and Safety in Health Care. 2003, 12 (suppl II): ii8-ii12.

  37. Bradley EH, Herrin J, Elbel B, McNamara RL, Magid DJ, Nallamothu BK, Wang Y, Normand S-LT, Spertus JA, Krumholz HM: Hospital Quality for Acute Myocardial Infarction: Correlation Among Process Measures and Relationship With Short-term Mortality. Journal of the American Medical Association. 2006, 296: 72-78. 10.1001/jama.296.1.72.

  38. Best WR, Cowper DC: The ratio of observed-to-expected mortality as a quality of care indicator in non-surgical VA patients. Medical Care. 1994, 32: 390-400. 10.1097/00005650-199404000-00007.

  39. Bulger EM, Nathens AB, Rivara FP, Moore M, MacKenzie EJ, Jurkovich GJ: Management of severe head injury: institutional variations in care and effect on outcome. Critical Care Medicine. 2002, 30 (8): 1870-6. 10.1097/00003246-200208000-00033.

  40. Curtis JR, Ullman M, Collier AC, Krone MR, Edlin BR, Bennett CL: Variations in medical care for HIV-related Pneumocystis carinii pneumonia: a comparison of process and outcome at two hospitals. Chest. 1997, 112 (2): 398-405.

  41. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH: Hospital inpatient mortality. Is it a predictor of quality?. New England Journal of Medicine. 1987, 317 (26): 1674-1680.

  42. Freeman C, Todd C, Camilleri-Ferrante C, Laxton C, Murrell P, Palmer CR, Parker M, Payne B, Rushton N: Quality improvement for patients with hip fracture: experience from a multi-site audit. Quality & Safety in Health Care. 2002, 11 (3): 239-45. 10.1136/qhc.11.3.239.

  43. Gibbs J, Clark K, Khuri S, Henderson W, Hur K, Daley J: Validating risk-adjusted surgical outcomes: chart review of process of care. International Journal for Quality in Health Care. 2001, 13 (3): 187-196. 10.1093/intqhc/13.3.187.

  44. Goldman RL, Thomas TL: Using mortality rates as a screening tool: the experience of the Department of Veterans Affairs. Joint Commission Journal on Quality Improvement. 1994, 20 (9): 511-522.

  45. Hannan EL, Kilburn H, O'Donnell JF, Lukacik G, Shields EP: Adult open heart surgery in New York State. An analysis of risk factors and hospital mortality rates. Journal of the American Medical Association. 1991, 265 (16): 2066-2068. 10.1001/jama.265.16.2066.

  46. Hartz AJ, Gottlieb MS, Kuhn EM, Rimm AA: The relationship between adjusted hospital mortality and the results of peer review. Health Services Research. 1993, 27 (6): 766-777.

  47. Lowrie EG, Teng M, Lacson E, Lew N, Lazarus JM, Owen WF: Association between prevalent care process measures and facility-specific mortality rates. Kidney International. 2001, 60: 1917-1929. 10.1046/j.1523-1755.2001.00029.x.

  48. Matsui K, Fukui T, Hira K, Sobashima A, Okamatsu S, Nobuyoshi M, Hayashida N, Tanaka S: Differences in management and outcomes of acute myocardial infarction among four general hospitals in Japan. International Journal of Cardiology. 2001, 78 (3): 277-84. 10.1016/S0167-5273(01)00387-4.

  49. McNaughton H, McPherson K, Taylor W, Weatherall M: Relationship between process and outcome in stroke care. Stroke. 2003, 34 (3): 713-7. 10.1161/01.STR.0000057580.23952.0D.

  50. Meehan TP, Hennen J, Radford MJ, Petrillo MK, Elstein P, Ballard DJ: Process and outcome of care for acute myocardial infarction among Medicare beneficiaries in Connecticut: a quality improvement demonstration project. Annals of Internal Medicine. 1995, 122: 928-936.

  51. Mohammed MA, Mant J, Bentham L, Raftery J: Comparing processes of stroke care in high- and low-mortality hospitals in the West Midlands, UK. International Journal for Quality in Health Care. 2005, 17 (1): 31-6. 10.1093/intqhc/mzh088.

  52. Mozes B, Shabtai E, Zucker D: Variation in mortality among seven hemodialysis centers as a quality indicator. Clinical Performance & Quality Health Care. 1998, 6 (2): 73-78.

  53. Park RE, Brook RH, Kosecoff J, Keesey J, Rubenstein L, Keeler E, Kahn KL, Rogers WH, Chassin MR: Explaining variations in hospital death rates: randomness, severity of illness, quality of care. Journal of the American Medical Association. 1990, 264: 484-490. 10.1001/jama.264.4.484.

  54. Peterson ED, Parsons LS, Pollack C, Newby LK, Littrell KA: Variation in AMI care processes across 1,085 US hospitals and its association with hospital mortality rates. Circulation. 2002, 106 (19): II-722.

  55. Romano PS, Remy LL, Luft HS: Second Report of the California Hospital Outcomes Project (1996): Acute Myocardial Infarction Volume Two: Technical Appendix – chapter 014. Validation study of acute myocardial infarction study: results. Center for Health Services Research in Primary Care, University of California, Davis. 1996, (last accessed 5 February 2006), [http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1017&context=chsrpc]

  56. As above, chapter 015. Validation study of acute myocardial infarction study: conclusions. (last accessed 5 February 2006), [http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1016&context=chsrpc]

  57. Thomas JW: Validating risk-adjusted outcomes as measures of quality of care in hospitals. Report prepared for the Minnesota Coalition on Health. 1991, Ann Arbor: University of Michigan School of Public Health, Department of Health Services Management and Policy

  58. Thomas JW, Holloway JJ, Guire KE: Validating risk-adjusted mortality as an indicator of quality of care. Inquiry. 1993, 30: 6-22.

  59. Todd CJ, Freeman CJ, Camilleri-Ferrante C, Palmer CR, Hyder A, Laxton CE, Parker MJ, Payne BV, Rushton N: Differences in mortality after fracture of hip: the East Anglian audit. British Medical Journal. 1995, 310: 904-8.

  60. Uphold CR, Deloria-Knoll M, Palella FJ, Parada JP, Chmiel JS, Phan L, Bennett CL: US hospital care for patients with HIV infection and pneumonia: the role of public, private, and Veterans Affairs hospitals in the early highly active antiretroviral therapy era. Chest. 2004, 125 (2): 548-56. 10.1378/chest.125.2.548.

  61. Walker GJA, Ashley DE, Hayes RJ: The quality of care is related to death rates: hospital inpatient management of infants with acute gastroenteritis in Jamaica. American Journal of Public Health. 1988, 78 (2): 149-152.

  62. Weir N, Dennis MS: Scottish Stroke Outcomes Study Group: Towards a national system for monitoring the quality of hospital-based stroke services. Stroke. 2001, 32 (6): 1415-21.

  63. Williams RL: Measuring the effectiveness of perinatal medical care. Medical Care. 1979, 17 (2): 95-110. 10.1097/00005650-197902000-00001.

  64. Allison JJ, Kiefe CI, Weissman NW, Person SD, Rousculp M, Canto JG, Bae S, Williams OD, Farmer R, Centor RM: Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. Journal of the American Medical Association. 2000, 284 (10): 1256-62. 10.1001/jama.284.10.1256.

  65. Baldwin LM, MacLehose RF, Hart LG, Beaver SK, Every N, Chan L: Quality of care for acute myocardial infarction in rural and urban US hospitals. Journal of Rural Health. 2004, 20 (2): 99-108. 10.1111/j.1748-0361.2004.tb00015.x.

  66. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM: Do "America's Best Hospitals" perform better for acute myocardial infarction?. New England Journal of Medicine. 1999, 340: 286-292. 10.1056/NEJM199901283400407.

  67. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM: Performance of the "100 top hospitals" : what does the report card report?. Health Affairs. 1999, 18 (4): 53-10.1377/hlthaff.18.4.53.

  68. Chen J, Rathore SS, Radford MJ, Krumholz HM: JCAHO accreditation and quality of care for acute myocardial infarction. Health Affairs. 2003, 22 (2): 243-10.1377/hlthaff.22.2.243.

  69. Gottwik MR, Zahn R, Schiele R, Schneider S, Gitt AK, Fraunberger L, Bossaller C, Glunz HG, Altmann E, Rosahl W, Senges J: Differences in treatment and outcome of patients with acute myocardial infarction admitted to hospitals with compared to without departments of cardiology; results from the pooled data of the Maximal Individual Therapy in Acute Myocardial Infarction (MITRA 1+2) Registries and the Myocardial Infarction Registry (MIR). European Heart Journal. 2001, 22 (19): 1794-801. 10.1053/euhj.2001.2630.

  70. Keeler EB, Rubenstein LV, Kahn KL, Draper D, Harrison ER, McGinty MJ, Rogers WH, Brook RH: Hospital characteristics and quality of care. Journal of the American Medical Association. 1993, 268 (13): 1709-1714. 10.1001/jama.268.13.1709.

  71. Krumholz HM, Rathore SS, Chen J, Wang Y, Radford MJ: Evaluation of a consumer-oriented internet health care report card – the risk of quality ratings based on mortality data. Journal of the American Medical Association. 2002, 287: 1277-1287. 10.1001/jama.287.10.1277.

  72. Metnitz PG, Reiter A, Jordan B, Lang T: More interventions do not necessarily improve outcome in critically ill patients. Intensive Care Medicine. 2004, 30 (8): 1586-93. 10.1007/s00134-003-2154-8.

  73. Pollack MM, Alexander SR, Clarke N, Ruttimann UE, Tesselaar HM, Bachulis AC: Improved outcomes from tertiary center pediatric intensive care: a statewide comparison of tertiary and nontertiary care facilities. Critical Care Medicine. 1991, 19 (2): 150-159. 10.1097/00003246-199102000-00007.

  74. Venkatappa S, Murray CK, Bratzler DW: Coronary artery bypass grafting surgery in Oklahoma: processes and outcomes of care. Journal – Oklahoma State Medical Association. 2003, 96 (2): 63-69.

Acknowledgements

RL was supported by a Medical Research Council network grant on patient safety.

Author information

Corresponding author

Correspondence to Richard J Lilford.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

RL conceived the idea of the systematic review. DP devised the search strategy and eliminated irrelevant papers. DP and MM independently reviewed the remaining abstracts to decide for which papers the originals should be obtained. Subsequently, DP and MM read the selected papers and independently decided which met the inclusion criteria. In cases of disagreement, RL also read the paper and the final decision was made by consensus. DP wrote the manuscript.

Electronic supplementary material

Additional File 1: Search strategy. (PDF 12 KB)

Additional File 2: Studies that directly correlate processes of care with risk-adjusted mortality. (PDF 44 KB)

Additional File 3: Studies that indirectly correlate process with outcome. (PDF 30 KB)

Additional File 4: Relationship between risk-adjusted mortality and processes of care by several strata – type of correlation, condition and organisation. Bracketed () figures are the most optimistic intuitive count if the three studies from Additional File 2 are included in which the relationship between quality of care and mortality was influenced by one outlier hospital. (PDF 17 KB)

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Pitches, D.W., Mohammed, M.A. & Lilford, R.J. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 7, 91 (2007). https://doi.org/10.1186/1472-6963-7-91
