
Editorials

Every system is designed to get the results it gets

BMJ 1997; 315 doi: https://doi.org/10.1136/bmj.315.7113.897 (Published 11 October 1997) Cite this as: BMJ 1997;315:897

So taking only one element out of it may not improve anything

Gerald T O'Connor, Professor of Medicine and of Community and Family Medicine
Center for the Evaluative Clinical Sciences, Dartmouth Medical School, Hanover, NH 03755-3863, USA

Patients and the public assume that the surgeon is responsible for the quality of surgical care and that they are protected from substandard care by quality monitoring conducted by professional bodies. These bodies are often presented with evidence suggesting suboptimal clinical care and rule on its validity. Such is the case of a cardiothoracic surgeon from Bristol, cited for unacceptably poor results, which comes before Britain's regulatory body, the General Medical Council, next week (in a case expected to last four months). Such cases occur with some frequency and often concern high-visibility specialties with easily counted outcomes. Yet causal attribution is difficult, since most clinicians provide care in complex settings over which individuals exert only limited control.

In cardiac care the skills of the anaesthetist, perfusionist, cardiac intensive care nurse, and others also affect the outcomes of care. Their individual competence is not sufficient: they must also work well together. It is the product of their individual work—not the sum—that the patient experiences. Removing one “outlier” surgeon from practice will, at most, influence the second decimal place of the national cardiac surgery mortality rate. It may be necessary for the public welfare that we do this, but the public should not be led to believe that such actions do much to improve the quality of care or reduce the risk of cardiac surgery. Furthermore, we must be aware of the predictable and pernicious effect of this action on other practitioners, who may feel that “there but for the grace of God go I.”
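
The scale of that claim can be checked with back-of-envelope arithmetic. The sketch below assumes, purely for illustration, a national volume of 25 000 operations a year at the 2.9% mortality rate discussed later in this editorial, and a single surgeon operating at twice that rate; all of these numbers are hypothetical.

```python
# Back-of-envelope check (all volumes and rates hypothetical): how much does
# removing one high-mortality surgeon move the national mortality rate?
national_cases = 25_000        # assumed annual national volume
national_rate = 0.029          # assumed national mortality rate
surgeon_cases = 140            # one surgeon's annual caseload
surgeon_rate = 0.058           # a surgeon at twice the national rate

# Deaths avoided if this surgeon's patients instead faced the national rate
excess_deaths = surgeon_cases * (surgeon_rate - national_rate)   # ~4 deaths

new_rate = (national_cases * national_rate - excess_deaths) / national_cases
print(f"{national_rate:.3%} -> {new_rate:.3%}")   # 2.900% -> 2.884%
```

Even under these unfavourable assumptions, the national rate moves by less than 0.02 percentage points.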

Much of the controversy that swirls around quality monitoring has to do with its methods. Assessing the outcomes of clinical care is difficult and relies on the methods of observational epidemiology. The primary threats to the validity of observational studies are chance, bias, and confounding.1 Each plays a potentially important role in measuring the outcomes of cardiac surgery. Random variability is important in rare outcomes. If a cardiac surgeon performs 140 coronary bypass procedures in a year there will, on average, be four deaths (a 2.9% mortality rate), and the 95% confidence interval extends from 0.8% to 7.2%.2 The imprecision of the estimate will be even greater for surgeons who do fewer procedures. One solution to sparse data is to aggregate results over a longer period, yet this may obscure important time-related effects. The use of control charts, which allow us to differentiate between sampling-related variability and actual process change, may help in monitoring quality.3 Bias is systematic error in the data. Discipline and the enthusiastic participation of clinicians will be required to yield accurate datasets. Confounding is a distortion of observed mortality rates brought about by differences in case mix. This is perhaps the most debated aspect of assessing the outcomes of clinical care, though it may not deserve as much attention as the complex system of causation that produces a clinical outcome.
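
For readers who want to verify those figures, one standard calculation, the exact (Clopper-Pearson) binomial interval, reproduces the quoted range for 4 deaths in 140 procedures. The short sketch below is illustrative only: the counts come from the text, scipy is assumed to be available, and the p-chart limit at the end is a simple 3-sigma illustration of the control-chart idea, not any particular published scheme.

```python
# Exact (Clopper-Pearson) 95% confidence interval for a mortality rate,
# using the figures quoted in the text: 4 deaths in 140 procedures.
import math
from scipy.stats import beta

deaths, procedures = 4, 140
alpha = 0.05

# Clopper-Pearson limits come from quantiles of the beta distribution
lower = beta.ppf(alpha / 2, deaths, procedures - deaths + 1)
upper = beta.ppf(1 - alpha / 2, deaths + 1, procedures - deaths)

print(f"Observed mortality: {deaths / procedures:.1%}")   # 2.9%
print(f"95% CI: {lower:.1%} to {upper:.1%}")              # 0.8% to 7.2%

# A 3-sigma p-chart limit in the same spirit: points above this limit
# suggest real process change rather than sampling variability.
p = deaths / procedures
sigma = math.sqrt(p * (1 - p) / procedures)
print(f"p-chart upper control limit: {p + 3 * sigma:.1%}")   # ~7.1%
```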

If chance, bias, and confounding are dealt with successfully, the result will be a valid measurement system. In fact, with respect to the mortality associated with coronary artery surgery, the science is relatively secure, as evidenced by consensus on important risk variables4 and the good performance of most multivariate risk models.5 When cardiac surgery centres have been carefully examined, substantial variability in the processes of care6 and clinical outcomes7 8 has been found. Indeed, it would be surprising if it were not found, given the variation in organisation, training, experience, habit, and setting. In this variability we will find the clues to improved clinical care. Improvement entails examining flawed systems and interactions among people as well as individual competencies.
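
To make case-mix adjustment concrete, the sketch below shows the general approach behind multivariate risk models: fit a logistic model to patient risk factors, then compare a centre's observed deaths with the deaths expected given its case mix. Every variable, coefficient, and dataset here is simulated and hypothetical; this is not a reproduction of any published model.

```python
# A minimal sketch of case-mix (risk) adjustment with a multivariate
# logistic model. All data, risk factors, and coefficients are hypothetical;
# real models use validated clinical variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
age = rng.normal(65, 10, n)          # patient age (years)
ef = rng.normal(55, 12, n)           # ejection fraction (%)
urgent = rng.binomial(1, 0.2, n)     # urgent or emergency operation

# Simulated "true" risk: mortality rises with age and urgency, falls with EF
logit = -7.6 + 0.08 * age - 0.03 * ef + 0.9 * urgent
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, ef, urgent])
model = LogisticRegression(max_iter=1000).fit(X, died)
expected = model.predict_proba(X)[:, 1]   # per-patient predicted risk

# Observed-to-expected mortality ratio for one hypothetical centre's cases;
# a ratio near 1 means case mix accounts for its crude mortality rate.
centre = slice(0, 1000)
oe = died[centre].sum() / expected[centre].sum()
print(f"O/E mortality ratio: {oe:.2f}")
```

In practice such a model would be fitted on one period's data and applied prospectively, and centres would be compared on observed-to-expected ratios or risk-adjusted rates rather than crude mortality.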

These observations then guide redesign of the processes of care. Without redesign, the same system and processes that have created the current reality will work together to repeat it, even if a single participant is removed. Many barriers to genuine quality improvement originate in the craftsman-type organisation of clinical care. Individual practitioners are expected to keep abreast of advances, attend to the details of clinical care, and learn from their experience. The responsibility is clearly placed, but the infrastructure is often inadequate to accomplish the task. The relative isolation of clinical practice, the lack of trusted real-time clinical measurement systems, and the difficulty of learning from the few adverse outcomes that do occur in any one practice are important barriers to improving clinical care.

We have gained some experience in developing an infrastructure for improving the quality of cardiac surgery in Maine, New Hampshire, and Vermont. All cardiac surgeons in this region contribute data on every case. The datasets are validated regularly, and reports are distributed and discussed three times a year. Specific studies and site visits by multidisciplinary teams9 are used to generate and test hypotheses and effect changes in the processes of care.10 The participation of clinicians for over a decade has confirmed that they care deeply about the quality of care. We have developed a regional infrastructure to examine processes, to use data for improvement, and to learn from daily practice. We have learnt from Deming,11 Shewhart,12 Berwick,13 and Nelson and Batalden14 that there are certain prerequisites for this type of activity. Foremost is a safe place to work. The data necessary to improve clinical care cannot be used to punish individuals who participate in the quality improvement efforts. We also need an agreed metric for outcomes and a forum to discuss results. The number of adverse outcomes in the experience of any particular clinician is simply too small to inform decisions. Lastly, we need comparative knowledge of the processes of care associated with outcomes so that clinicians can learn from each other. In our experience, this process has been multidisciplinary, scientifically rigorous, inexpensive, and enjoyable.

Continual improvement of health care is a goal shared by society, payers, and clinicians. It is essential for the public good that clinicians assume responsibility for improving the quality of clinical care. They have unique knowledge of clinical reasoning and processes, are best placed to render opinions on the adequacy of clinical care, and have traditionally assumed the role of patient advocates. Sustained incremental improvement will require leadership by professional societies in each specialty and a commitment to develop the necessary infrastructure. This approach holds great potential to save lives, improve functional health status, and increase the efficiency of clinical care. The focus on inspection of individual outcomes and punishment may occasionally be necessary to protect public safety, but it is not enough genuinely to improve health care.

References

1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.