Article Text


Setting the record straight on measuring diagnostic errors. Reply to: ‘Bad assumptions on primary care diagnostic errors’ by Dr Richard Young
  1. Hardeep Singh1,
  2. Dean F Sittig2
  1. 1 Houston Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center and the Section of Health Services Research, Department of Medicine, Baylor College of Medicine, Houston, Texas, USA
  2. 2 University of Texas School of Biomedical Informatics and the UT-Memorial Hermann Center for Healthcare Quality & Safety, Houston, Texas, USA
  1. Correspondence to Dr Hardeep Singh, VA Medical Center (152), 2002 Holcombe Blvd, Houston, TX 77030, USA; hardeeps{at}bcm.edu


Upon reading Dr Young's letter,1 we felt we should have prefaced our article by quoting Box and Draper, who wrote in their classic 1987 book,2 ‘all models are wrong, but some are useful’. Our goal in developing a new model for safer diagnosis in healthcare was to illustrate the myriad, complex, socio-technical issues and their interactions within a complex adaptive healthcare system that must be considered when attempting to define and measure errors in the diagnostic process.3 While the comments by Dr Young provide one clinician's view of the complexity and breadth of diagnostic error, we welcome the opportunity to respond and clarify the premise of the Safer Dx framework. Dr Young is concerned that we presented the problem of diagnostic error as black and white rather than considering the day-to-day realities of patient care, which include vast uncertainties in data collection, interpretation and synthesis. He further asserts that the concept of delayed diagnosis in primary care needs to be severely curtailed and that, except for straightforward cases of blatant negligence, we should not even use the language of delayed or missed diagnoses. Lastly, he writes that physicians might be vilified for missed or delayed diagnosis even though they made appropriate and informed decisions, including watchful waiting when the diagnosis was not yet clear. In our response below, we attempt to lay many of these concerns to rest and provide further information on diagnostic error reduction research. We believe the concepts that we clarify herein will reassure hardworking primary care clinicians on the frontlines that the major goal of our framework is to promote the understanding and reduction of preventable diagnostic harm to our patients and not to assign blame. Moreover, the rationale for the framework and our response applies to all clinicians, not just those in primary care.

First, we would like to clarify that the goal of the paper was to establish a foundation for using a systems-based approach to advance the science of measurement of diagnostic error rather than to debate the challenges of defining diagnostic error in individual patients, which we agree are plentiful, as seen in our research.4–6 Frameworks allow researchers and other stakeholders to have a high-level, conceptual understanding of the problem they are trying to solve, and help them consider the many moving parts, and the relationships among them, that influence the problem. To specifically respond to Dr Young's concern, we explicitly mentioned the ‘difficult conceptualisation of the diagnostic process’ as a problem in the introduction of the paper and acknowledged that ‘diagnosis evolves over time and is not limited to events during a single provider visit’. We also stated the need to address the problem of diagnostic error through better measurement tools and rigorous definitions. This conceptual approach will lead to a more robust understanding of what the diagnostic process entails and what breakdowns exist, especially beyond the patient–provider level, such as at the system level. In the 15 years since the ‘To Err Is Human’ Institute of Medicine report,7 there has been little progress in reducing diagnostic errors, which consistently show up in many studies as a prominent reason for preventable harm to patients.8–12 One of the many gaps contributing to this lack of progress is precisely the poor understanding of measurement of the diagnostic process that we are trying to improve with the Safer Dx framework. The framework helps ensure that improvement efforts are consistent with the larger systems-based approach to improving patient safety.13 As we mention in our paper, ‘high quality diagnostic performance requires both a well-functioning healthcare system along with high-performing individual providers and care teams within that system’.

By borrowing concepts from non-healthcare-related disciplines (such as human factors and medical informatics) to improve systems, we are ensuring that the many moving parts of our framework, including tools and technologies, can best fit within and support the physician's work environment in order to improve patient safety.14–17 In fact, one of the paper's authors (DFS) is a clinical informatician who has worked for >25 years to improve patient care through the use of better information systems and technology.

Second, our paper uses a ‘real-world’ definition of diagnostic error18 and acknowledges some of the uncertainties and evolution in the diagnostic process that Dr Young writes about. This definition has been developed through several large studies over the past decade4–6 by one of the authors (HS), who is a practising internist with primary care experience in both academic and rural settings. All of these studies have illustrated the many ‘grey zones’ related to diagnostic error. Although a briefer version of the definition of diagnostic error is mentioned in the paper, the following passage from the original citation we referenced in our paper18 might help clarify how we contextualise the concept of errors as missed opportunities to promote learning and improvement rather than to assign blame:

Although it's tempting to assign responsibility for a diagnostic error to a single clinician, research suggests that the interplay of both system and cognitive contributory factors is almost universal. Thus, in our work within our multidisciplinary research group, we have shifted toward rebranding diagnostic errors as “missed opportunities”. While our research team continues to refine definitions and measurement, we have found the following three criteria useful in defining diagnostic errors:

  1. Case analysis reveals evidence of a missed opportunity to make a correct or timely diagnosis. The concept of a missed opportunity implies that something different could have been done to make the correct diagnosis earlier. The missed opportunity may result from cognitive and/or system factors or may be attributable to more blatant factors, such as lapses in accountability or clear evidence of liability or negligence.

  2. The missed opportunity is framed within the context of an “evolving” diagnostic process. The determination of error depends on the temporal or sequential context of events. Evidence of omission (failure to do the right thing) or commission (doing something wrong) exists at the particular point in time at which the “error” occurred.

  3. The opportunity could be missed by the provider, care team, system and/or patient. A preventable error or delay in diagnosis may occur due to factors outside the clinician's immediate control or when a clinician's performance is not contributory. This criterion suggests a system-centric versus physician-centric approach to diagnostic error.

Reframing diagnostic errors as missed opportunities in diagnosis could help shift attention and resources from attributing blame to learning from these scenarios.

Thus, case examples that would qualify as errors include failure to evaluate a patient further despite the presence of red flag symptoms for cord compression,19 failure to notify a patient of, and follow up on, an abnormal chest X-ray report suggestive of cancer, leading to delay in cancer diagnosis,20 and delays in diagnosis due to breakdowns in the referral process.21 ,22 Instances of appropriate and informed decisions, including watchful waiting when the diagnosis is not clear, would not be categorised as diagnostic errors under this definition. Furthermore, one of the authors (HS) has recently outlined the many challenges of defining diagnostic error in a separate paper that discusses concepts such as the notion that diagnosis is not always black and white, watchful waiting and overzealous diagnostic pursuits.23 At the risk of duplicating our previous publication materials and due to journal space constraints, we did not offer this level of detail while describing the Safer Dx framework, but we are pleased to use this opportunity to do so. In summary, this approach acknowledges the complexity of identifying diagnostic errors, addresses the concern that we need to contextualise diagnostic errors within real-world clinical settings and highlights our responsibility to continually improve.

Lastly, we disagree that the paper attempts to blame or vilify primary care physicians. We would hope that the concepts we mention, such as feedback and learning from missed opportunities, which are important concepts in patient safety, are not misconstrued as blame or vilification. In fact, in a recent opinion editorial in a national newspaper,24 HS further called for the need to address system issues such as time pressures, administrative burden, lack of support tools and slow innovation in electronic health records in order to improve diagnosis through better patient–physician interactions. We recognise the value of primary care and the difficulties that come with its practice, and we hope that advancing the science of measuring diagnostic errors will bring more attention and resources to help reduce preventable harm to our patients.


Footnotes

  • Twitter Follow Dean Sittig at @DeanSittig and Hardeep Singh at @HardeepSinghMD

  • Funding HS was supported by the VA Health Services Research and Development Service (CRE 12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety and the Agency for Health Care Research and Quality (R01HS022087). This work was supported in part by the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413).

  • Disclaimer The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs or the University of Texas.

  • Competing interests None.

  • Provenance and peer review Not commissioned; internally peer reviewed.
