Association for Academic Surgery, 2008
A Report Card System Using Error Profile Analysis and Concurrent Morbidity and Mortality Review: Surgical Outcome Analysis, Part II

https://doi.org/10.1016/j.jss.2008.02.051

Background

An effective report card system for adverse outcome error analysis following surgery is lacking. We hypothesized that a memorialized database could be used in conjunction with error analysis and management evaluation at Morbidity & Mortality conference to generate individualized report cards for Attending Surgeon and System performance.

Study Design

Prospectively collected data from September 2000 through April 2005 on 1618 adverse outcomes, including 219 deaths, following 29,237 operative procedures were reported after Morbidity & Mortality review in a complete loop to approximately 60 individual surgeons and responsible system personnel.

Results

A 40% reduction in gross mortality (P < 0.001) and a 43% reduction in age-adjusted mortality were achieved over 4 years at the Academic Center. Quality issues were identified at a rate three times greater than required by New York State regulations, rising from a baseline of 4.96% to 32.7% (odds ratio 1.94; P < 0.03) in cases associated with mortality. A detailed review demonstrated a significant increase (P < 0.001) in system errors and in physician-related diagnostic and judgment errors associated with mortality, highlighted the practices and processes involved, and contrasted the results between the academic (43% mortality improvement) and community (no improvement) hospitals.

Conclusions

The findings suggest that structured concurrent data collection combined with non-punitive error-based case review and individualized report cards can be used to provide detailed feedback on surgical performance to individual surgeons and possibly improve clinical outcomes.

Introduction

Nascent methodologies are evolving to enhance quality via health care team awareness and feedback. For example, public report cards and confidential peer collaboration represent distinct and complementary approaches to quality assessment and improvement [1]. Public report cards are usually based on objective performance measures, such as HEDIS and CMS indicators, mortality, length of stay, and resource utilization; such measures are used as a reflection of overall quality. Public report data are collected and presented impersonally, with no interaction with the individual surgeons, who receive little feedback. Conversely, collaborative peer education usually occurs in the Morbidity and Mortality (M&M) conference, a personal and relevant educational forum in which to intelligently discuss both subjective and objective issues of surgical judgment, technique, and patient management [2]. Many surgeons rely solely on complication review at M&M conferences to gain experience with quality improvement techniques, evaluate their experience, and critique outcomes [2, 3].

Over 100 years ago, Dr. Ernest A. Codman, “a surgeon trained at Harvard and Massachusetts General Hospital developed a data system using file cards that linked medical intervention to long-term patient outcomes. His system concentrated on treatments that failed to produce a desired effect, and he assigned each complication to categories such as error in technique, error in management, or patient's disease” [4]. Dr. Codman left the Massachusetts General Hospital amid criticism generated by his methodology, and to this day, systematic physician report cards based on cumulative data derived from comprehensive adverse outcome review are lacking. The National Surgical Quality Improvement Program (NSQIP) does gather adverse outcome data using chart abstraction and currently provides hospital-level risk-adjusted M&M rates for a defined group of adverse outcomes. Hutter [5] demonstrated improved reporting of complication and death rates using NSQIP as the data-gathering platform, but did not show a significant increase in the number of identified cases actually reviewed at M&M. No system, to our knowledge, integrates adequate sampling, in-depth clinical review, error analysis, and management assessment/critique into a comprehensive “report card” reflecting clinical performance at the individual surgeon level. This paper discusses our efforts toward, and the results of, a comprehensive surgeon report card system based on clinical review of adverse outcomes following surgery, using the M&M conference as the peer review method and a relational database for non-punitive error analysis and reporting.
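The relational database itself is not described beyond this summary. As a point of reference only, the sketch below shows what a minimal, memorialized, case-linked schema of this kind might look like; it assumes SQLite, and the table and column names (procedure_case, adverse_outcome, mm_review) are illustrative assumptions, not the authors' actual design.

```python
# A minimal sketch, assuming SQLite; names are illustrative, not the study's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE procedure_case (
    case_id      INTEGER PRIMARY KEY,
    surgeon_id   INTEGER NOT NULL,
    facility     TEXT NOT NULL,            -- e.g., academic vs. community site
    performed_on TEXT NOT NULL             -- ISO date of the procedure
);
CREATE TABLE adverse_outcome (
    outcome_id     INTEGER PRIMARY KEY,
    case_id        INTEGER NOT NULL REFERENCES procedure_case(case_id),
    severity_grade INTEGER NOT NULL,       -- severity grade stratification
    is_death       INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE mm_review (
    outcome_id  INTEGER PRIMARY KEY REFERENCES adverse_outcome(outcome_id),
    error_type  TEXT,                      -- e.g., 'system', 'diagnostic', 'judgment', 'technique'
    management  TEXT CHECK (management IN
        ('appropriate', 'controversial', 'potential_quality_issue'))
);
""")

# Memorialize one reviewed case end to end, then read it back.
conn.execute("INSERT INTO procedure_case VALUES (1, 7, 'academic', '2003-05-12')")
conn.execute("INSERT INTO adverse_outcome VALUES (1, 1, 3, 0)")
conn.execute("INSERT INTO mm_review VALUES (1, 'judgment', 'controversial')")
print(conn.execute(
    "SELECT surgeon_id, error_type, management FROM procedure_case "
    "JOIN adverse_outcome USING (case_id) JOIN mm_review USING (outcome_id)"
).fetchall())
```

Whatever the underlying implementation, the essential design choice is that each adverse outcome remains linked to its case, its responsible surgeon, and its M&M review verdict, so that report cards can later be aggregated per surgeon or per facility.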


Methods

Beginning in 2001, we implemented a policy of mandatory review of all adverse outcomes and deaths at Surgical Morbidity and Mortality Conference for each of the three hospitals and one ambulatory care facility within Beth Israel Medical Center (BIMC), the second largest healthcare system in New York City. Age-adjusted mortality rates were calculated, and severity grade stratification for all adverse outcomes was performed [6].
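The manuscript does not specify the standardization method used for the age-adjusted rates; as a hedged illustration only, the sketch below assumes direct standardization, in which stratum-specific mortality rates are weighted by a fixed reference population. All numbers are invented for illustration and are not study data.

```python
# Hedged sketch: direct age standardization of a mortality rate.
# Assumes a fixed reference population; all figures are illustrative.

def age_adjusted_rate(stratum_deaths, stratum_cases, reference_weights):
    """Weighted average of age-stratum mortality rates.

    The three arguments are parallel lists, one entry per age stratum;
    reference_weights must sum to 1.0.
    """
    assert abs(sum(reference_weights) - 1.0) < 1e-9
    return sum(
        (deaths / cases) * weight
        for deaths, cases, weight in zip(stratum_deaths, stratum_cases, reference_weights)
    )

# Illustrative example with three age strata.
deaths = [2, 10, 30]
cases = [4000, 5000, 3000]
weights = [0.40, 0.35, 0.25]
print(f"Age-adjusted mortality: {age_adjusted_rate(deaths, cases, weights):.4%}")
```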

Surgeon Report Card Format

Individual and Hospital Report cards consisted of four components:

  • The first component was a summary of case numbers, complication rates, mortality rates, identified quality issues, Surgical QI committee referrals, number of NYPORTS reports, number of Root Cause Analyses (RCAs), Error Summary, and Management Assessment (Appropriate/No Quality Issues, Medical Management Controversial, or Potential Quality Issue); a computational sketch of this summary component appears after this list.

  • The second component of the report card was a Complication Profile broken down by System
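The summary component described in the first bullet maps naturally onto a small aggregation over per-case review records. The following self-contained sketch shows one way such a summary could be assembled; ReviewedCase and report_card_summary are hypothetical names introduced for illustration, not the authors' software.

```python
# Hedged sketch: building the per-surgeon summary component of a report card
# from reviewed cases. Field and function names are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ReviewedCase:
    surgeon_id: int
    complications: int   # adverse outcomes attributed to this case
    death: bool
    management: str      # 'appropriate' | 'controversial' | 'potential_quality_issue'

def report_card_summary(cases: list[ReviewedCase], surgeon_id: int) -> dict:
    """Case count, complication and mortality rates, and management-assessment mix."""
    mine = [c for c in cases if c.surgeon_id == surgeon_id]
    n = len(mine)
    assessments = Counter(c.management for c in mine)
    return {
        "cases": n,
        "complications_per_case": sum(c.complications for c in mine) / n if n else 0.0,
        "mortality_rate": sum(c.death for c in mine) / n if n else 0.0,
        "management_pct": {k: v / n for k, v in assessments.items()} if n else {},
    }

# Illustrative usage with invented data.
sample = [
    ReviewedCase(7, 1, False, "appropriate"),
    ReviewedCase(7, 2, True, "controversial"),
    ReviewedCase(7, 0, False, "appropriate"),
]
print(report_card_summary(sample, surgeon_id=7))
```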

Error Analysis and Management Review

From January 2002 through December 2004, all cases with adverse outcomes and interesting cases were reviewed at M&M conferences weekly, and 1440 adverse outcomes were identified in 1049 cases. Additionally, 219 cases were presented for academic interest. From these cases, 189 surgeon report cards were issued to 63 active surgeons, and management review was judged appropriate, controversial, or a potential quality issue in 81%, 14%, and 5% of cases, respectively (Table 1). Physician and system

Discussion

Challenges within the surgical quality arena revolve primarily around issues related to the adequacy of reporting (i.e., whether the sample of adverse outcomes identified is sufficient and accurate enough to be reflective of the population as a whole) and to risk adjustment to support comparative benchmarking. Case critique, review, and formal data sharing have received less attention because of the significant effort required to complete detailed review and the variability of review methodology. We

Acknowledgments

We thank J. Thorsen, M. Garcia, A. Thorne, and M. Gold for their tremendous efforts and dedication to improving the quality of care and safety of our patients. The authors acknowledge the critical input of Michael Cantor, M.D., Harold Laufman, M.D., and Eric Schneider, M.D., M.Sc., for their constructive comments and critique of this manuscript. Dr. Antonacci and Mr. Lam report holding equity ownership in Outcomes Management Systems.
