Interventions: help from other people
Author (year) | Study type and participants | Intervention | Outcome measures | Results | Conclusions | Outcomes Rating | Strength of conclusions (1–5) |
Second opinions in pathology | |||||||
Raab et al24 (2008) | Before/after with expert cytologists | Use of second readings before sign-out at three institutions, comparing random reviews with organ-targeted reviews. | Proportion of diagnostic errors detected. | Few diagnostic errors detected; no significant differences among sites; tissue-specific reviews yielded higher error rates than random reviews. | Tissue-specific reviews yielded higher error rates than random reviews. | 2b | 2 |
Raab et al25 (2006) | Before/after with expert pathologists | Second reading of pathology cases: random review of 5% of cases and focused review of all cases. | Per cent of diagnostic errors. Impact of discrepancies on patient care. | Focused review detected approximately four times more diagnostic errors than 5% random review. Most errors in both groups caused no harm or only low-grade harm. | Second-opinion reviews can be a method to standardise diagnostic practice. | 4a | 3 |
Manion et al26 (2008) | Before/after with expert pathologists | Second reading of pathology slides received from an external organisation. | Rate of diagnostic variation and change in patient management due to second reading. | No disagreement in the majority of cases; minor disagreement in a small percentage; major disagreement in a very small percentage. Management plan changed in half of the cases with major disagreement. | Mandatory second opinion in surgical pathology may be a beneficial patient care practice. However, when readers disagreed, chart reviews were inconclusive as to how often the second opinion was correct. | 4a | 2 |
Nordrum et al27 (2004) | Before/after with expert pathologists | Use of still images in second opinions of pathology cases. | Diagnostic accuracy rate (glass slides vs still images). | Nearly identical diagnostic accuracy rates with still images and glass slides. | Diagnosis from still images appears comparable to diagnosis from glass slides, which could make second opinions easier to obtain. | 3 | 3 |
Hamady et al28 (2005) | Before/after with an expert surgeon, oncologist and pathologist | Use of second opinions from a multidisciplinary team of clinicians. | Percentage of second opinions resulting in a different diagnosis. | Complete agreement in the majority of cases; disagreement in a small percentage. | Diagnostic and therapeutic discrepancies can occur when multiple experts review the same patient case. It is unclear whether the second opinion leads to better outcomes. | 4a | 4 |
Second opinions in radiology | |||||||
Benger and Lyburn29 (2003) | Before/after with ER and radiology staff | Second reading by radiology staff of radiographs initially interpreted in the ER. | Rate of diagnostic agreement. Clinical impact of diagnostic discrepancy. | Very small number of discrepancies, and these required only minimal changes in management. | The low rate of significantly misread radiographs suggests that incorporating selective second readings may be warranted. | 4a | 3 |
Espinosa and Nolan30 (2000) | Before/after with ER physicians and radiologists | X-rays read by both an ER physician and a radiologist. | Radiograph interpretation errors and number of potential adverse events. | Interpretation error rate and potential adverse events decreased (based on a reliability model, not raw data). | Procedures for interpreting radiographs that are designed to mitigate errors can reduce adverse events. Without a control group it is difficult to know whether the improvement resulted from the intervention. | 4a | 2 |
Duijm et al31 (2007) | Before/after with mammography technologists and radiologists | Second reading of mammograms by technologists, in addition to standard double reading by radiologists. | Breast cancer detection and positive predictive value (PPV) of referral. | Modest increase in cancer detection and modest decrease in PPV. | Adding a second reading by technologists may be effective in detecting more breast cancer cases. Readings should be considered for referral, given the high prevalence of breast cancer. | 4a | 4 |
Kwek et al32 (2003) | Before/after with expert radiologists | Blinded second readings in mammography. | Rate of cancer detection, patient recall, rate of biopsy and mean second-screener contribution. | Small increase in cancer detection. Recall rate increased modestly. Biopsy rate slightly increased. Efficiency of the second reader was minimal. | Second reading of mammograms is recommended for breast cancer screening if resources are available. | 4a | 4 |
Canon et al33 (2003) | Before/after with expert radiologists | Second review of barium enema tests. | Detection of polyps. | Second reading failed to improve detection of polyps. | Routine second reading is not warranted for barium enema examination. | 4a | 4 |
Help from groups and librarians | |||||||
Christensen et al34 (2000) | Non-randomised controlled trial with clinical teams | Team diagnostic decision-making in which members were given shared or private information that the group needed to pool for correct diagnoses. | Diagnostic error rate. | Diagnostic errors increased when team members held private information. | Failure to share data may be detrimental to diagnostic accuracy. Clinical decisions that rely on privately held information are susceptible to error. | 2b | 4 |
Mulvaney et al35 (2008) | Randomised controlled trial with clinical teams | Use of an evidence-based informatics tool that provides research evidence to inform clinicians about patient care practices. | Impact on patient care practices and clinical actions, articles read, satisfaction with search results, consultations, time to obtain evidence and clinician searches. | The tool had a significant impact on users' reports of future patient care, satisfaction with the articles returned and the amount of time spent obtaining evidence. No significant impact on the other items. | Informatics tools may facilitate the use of research evidence and influence clinical actions. However, this study did not yield data on effects on patients. | 3 | 4 |
Outcomes Ratings reflect the level of impact of each intervention on reducing diagnostic errors.9,10 Strength of Conclusions was rated on a numerical scale (1–5) in accordance with Best Evidence in Medical Education guidelines (5=strongest).9,11 ER, emergency room.