
Common contributing factors of diagnostic error: A retrospective analysis of 109 serious adverse event reports from Dutch hospitals
Jacky Hooftman1,2, Aart Cornelis Dijkstra3, Ilse Suurmeijer4, Akke van der Bij5, Ellen Paap3, Laura Zwaan6

  1. Department of Public and Occupational Health, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
  2. Quality of Care, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
  3. Knowledge Institute, Dutch Association of Medical Specialists, Utrecht, The Netherlands
  4. Faculty of Health Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
  5. Department of Microbiology and Immunology, Diakonessenhuis, Utrecht, The Netherlands
  6. Institute of Medical Education Research Rotterdam, Erasmus Medical Centre, Rotterdam, The Netherlands

Correspondence to Dr Laura Zwaan, Institute of Medical Education Research Rotterdam (iMERR), Erasmus Medical Centre, Dr. Molewaterplein 40, 3015GD, Rotterdam, The Netherlands; l.zwaan{at}


Introduction Although diagnostic errors have gained renewed focus within the patient safety domain, measuring them remains a challenge. They are often measured using methods, such as record reviews, that lack information from the involved physicians about their decision-making processes. The current study analyses serious adverse event (SAE) reports from Dutch hospitals to identify common contributing factors of diagnostic errors in hospital medicine. These reports are the results of thorough investigations by highly trained, independent hospital committees into the causes of SAEs. The reports include information from involved healthcare professionals and patients or family obtained through interviews.

Methods All 71 Dutch hospitals were invited to participate in this study. Participating hospitals were asked to send four diagnostic SAE reports of their hospital. Researchers applied the Safer Dx Instrument, a Generic Analysis Framework, the Diagnostic Error Evaluation and Research (DEER) taxonomy and the Eindhoven Classification Model (ECM) to analyse reports.

Results Thirty-one hospitals submitted 109 eligible reports. Diagnostic errors most often occurred in the diagnostic testing, assessment and follow-up phases according to the DEER taxonomy. The ECM showed human errors as the most common contributing factor, especially relating to communication of results, task planning and execution, and knowledge. Combining the most common DEER subcategories and the most common ECM classes showed that clinical reasoning errors resulted from failures in knowledge, and task planning and execution. Follow-up errors and errors with communication of test results resulted from failures in coordination and monitoring, often accompanied by usability issues in electronic health record design and missing protocols.

Discussion Diagnostic errors occurred in every hospital type, in different specialties and with different care teams. While clinical reasoning errors remain a common problem, often caused by knowledge and skill gaps, other frequent errors in communication of test results and follow-up require different improvement measures (eg, improving technological systems).

  • diagnostic errors
  • patient safety
  • hospital medicine

Data availability statement

No data are available.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:



What is already known on this topic

  • The impact of diagnostic errors on patient harm and patient safety is larger than that of other types of errors. While diagnostic errors have gained more attention over the last decades, research has largely relied on methods that give little consideration to the decision-making processes of healthcare professionals.


What this study adds

  • In this study, serious adverse event (SAE) reports of diagnostic errors were analysed, which are based on a thorough investigation of the SAE by a trained, multidisciplinary hospital committee, including interviews with involved healthcare professionals and patients and/or family members. Results show that both gaps in knowledge and skill and coordination and monitoring failures played a role, especially with regard to patient follow-up, or follow-up of (abnormal) test results. Issues with the electronic health record (EHR) seemed to play a role in these types of error as well.


How this study might affect research, practice or policy

  • These results show that improvement strategies should focus on improving knowledge and skills, for example, by exposure to and practice with a large variety of clinical cases. In addition, interventions related to ‘closing-the-loop’ are important, that is, improving communication as well as systems that ensure information handover during transitions of care, communicating test results and follow-up. The role of the EHR in the diagnostic process as both a cause of errors and as a potential solution should be further investigated.


Diagnostic errors, that is, missed, delayed or incorrect diagnoses, are estimated to occur in 5–15% of patient visits and admissions and can cause severe morbidity and mortality.1–7 Since the landmark report on improving diagnosis in healthcare was published by the National Academies of Sciences, Engineering, and Medicine, research on diagnostic errors has received a renewed focus within the field of patient safety.5 6

Despite this renewed focus, research into diagnostic error and diagnostic safety remains challenging because of the complexity of diagnostic errors and the difficulty of measuring them.1 8 Diseases and the diagnostic process evolve over time, which can make it difficult to establish at what point a disease could and should have been diagnosed.8 Choices about diagnostic testing balance the risks of underdiagnosis and overdiagnosis, so some diagnoses are missed in the effort to prevent overdiagnosis.8 Furthermore, judging whether a diagnostic error occurred is always done retrospectively, making such reviews susceptible to hindsight bias.8 9 Additionally, diagnostic errors, more than other error types, have multiple contributing factors, including the context in which the diagnostic process takes place.4 10 They often involve a combination of cognitive, organisational and technical problems, with cognitive errors being the most common.11

Most existing research into diagnostic error relies on methods such as record reviews, which often lack input from the involved clinicians and/or the patient or family.4 11 12 Consequently, these methods provide insufficient information to evaluate decisions in the diagnostic process. This makes it challenging to assess the causes of diagnostic errors and the rationale behind errors in the diagnostic reasoning process. Previous research combining record reviews with physician interviews has proven successful in gathering insightful information on diagnostic reasoning processes.13

Hospitals in the Netherlands are required by law to extensively analyse serious adverse events (SAEs) in order to identify root causes and improve patient care. SAEs, in this context, are defined as unintended or unexpected events resulting in temporary or permanent disability, death or prolonged care, caused by healthcare management rather than the disease.12 For each SAE that occurred in a Dutch hospital, a report must be submitted to the Dutch Health and Youth Care Inspectorate, where the content and quality of the report are evaluated.14 15 These reports contain extensive analyses of the SAEs, including in-depth interviews with all involved clinicians, and the patient and/or family members. Therefore, these reports contain more extensive information regarding the context, cognitive processes and the causes of the SAEs than would be available in health records alone. Previous research using SAE reports has shown that these reports are most suitable for analysing diagnostic error events.16

In the current study, we aimed to collect diagnostic SAE reports from a wide range of Dutch hospitals and analyse these reports using established tools for analysing diagnostic error, in order to better understand the contributing factors of diagnostic errors in Dutch hospitals.


Methods

This study included SAE reports from hospitals in the Netherlands. The reports are composed by a multidisciplinary, independent hospital committee, which is highly trained to perform root cause analyses. The committee has access to hospital guidelines and protocols and performs extensive analyses of the SAEs. This includes in-depth interviews with involved clinicians and patient and/or family members. The findings are discussed and written down in the report. These reports are mandatory whenever an SAE occurs in a Dutch hospital, and the content and quality are evaluated by the Dutch Health and Youth Care Inspectorate.14 15

Participating hospitals: recruitment and inclusion criteria

We aimed to include a representative sample of hospitals, including academic, teaching and general hospitals from different regions in the Netherlands. We therefore aimed to include at least 20 different hospitals, a number similar to other representative studies in the Netherlands.17 For analysing causes, it is recommended to include at least 50 reports to account for variety between cases.18 To capture variety between cases, hospital types and regions in the Netherlands, we aimed to include 100 diagnostic SAE reports from at least 20 different hospitals. Participating hospitals were asked to provide up to four SAE reports concerning a diagnostic error that occurred between 2018 and 2021.

All general hospitals, teaching hospitals and university hospitals (71 in total) in the Netherlands were invited to participate in this study via a letter to the board of directors. All medical departments and specialties were included, except for the emergency department; this department was recently described in a similar study.16 19 The submitted reports were evaluated, and incomplete, unclear, duplicate and out-of-scope reports (eg, judged as no SAE, SAE occurred at the emergency department) were removed. It is important to note that hospital care in the Netherlands includes both inpatient and outpatient care (eg, diagnostic tests, check-up visits); both were included in this study. After the initial letters, both the targeted number of hospitals and the targeted number of reports were exceeded, and therefore no reminders were sent.

Analysis of the SAE reports

All included SAE reports were analysed using a variety of established tools and taxonomies, that is, Safer Dx Instrument,20 Generic Analysis Framework,19 Diagnostic Error Evaluation and Research (DEER) taxonomy3 and Eindhoven Classification Model (ECM). These instruments were combined in a specifically designed Access database form (Office 365, Microsoft). Two reviewers, who were research interns with clinical knowledge (ACD and IS), were trained and educated on the use of the different instruments. They analysed the first 10 reports independently. The two reviewers and three researchers (AvdB, EP and LZ) subsequently discussed the discrepancies during a consensus meeting and formulated criteria for the analysis of the remaining reports. Subsequent reports were analysed by one researcher (ACD or IS) according to these criteria. Uncertainties were discussed with at least three researchers until consensus was reached.

Safer Dx Instrument

The Safer Dx Instrument20 is a 13-item instrument that can be used to determine whether a diagnostic error occurred. The instrument has 12 items that each focus on a specific part of the diagnostic process. The concluding 13th item determines whether there was a ‘missed opportunity to make a correct and timely diagnosis’.20 All items were judged on a 7-point scale (1=strongly disagree, 4=neutral, 7=strongly agree). An SAE was considered a diagnostic error when the score was ≥4 on the concluding item, in line with the recommendations for use of the Safer Dx Instrument20 and with previous studies.10 16
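The decision rule above can be sketched as follows. This is an illustrative fragment only (not the authors' analysis code, which was written in R), with entirely hypothetical scores:

```python
# Illustrative sketch of the Safer Dx decision rule: the concluding
# (13th) item is scored 1-7, and a score of 4 or higher flags a
# 'missed opportunity to make a correct and timely diagnosis'.
def is_diagnostic_error(item13_score: int) -> bool:
    """Return True when the concluding Safer Dx item is 4 (neutral) or higher."""
    if not 1 <= item13_score <= 7:
        raise ValueError("Safer Dx items are scored on a 1-7 scale")
    return item13_score >= 4

# Hypothetical concluding-item scores for three reports:
scores = [6, 3, 4]
flags = [is_diagnostic_error(s) for s in scores]  # [True, False, True]
```

In the present study every included report scored ≥4, so all were classified as diagnostic errors.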

Generic Analysis Framework and outcome classification

A Generic Analysis Framework was adapted from a study by Baartmans and colleagues.19 The framework was designed to summarise SAE reports and was used in this study to systematically extract general information relating to the patient and the hospital visit (eg, patient characteristics, reason for visit, hospital type, involved healthcare professionals). The items in the framework could be directly obtained from the SAE reports and did not require interpretation by the researchers.

In addition to the Generic Analysis Framework, consequences of the SAEs were classified on a 7-point scale outlining the outcome in terms of disability or death (ie, 1=no disability, 2=minimal disability, 3=transient disability, recovery period 1–6 months, 4=transient disability, recovery period 6–12 months, 5=up to 50% permanently disabled, 6=more than 50% permanently disabled, 7=death), as used in a Dutch multicentre adverse event record review study.21

DEER taxonomy

The DEER taxonomy3 is a categorisation of diagnostic errors based on commonly accepted steps of the diagnostic process (ie, access to care, history taking, physical examination, diagnostic testing, assessment, consultation/referral and follow-up). Each category consists of multiple subcategories (see online supplemental appendix 1). The DEER taxonomy has frequently been used in a variety of medical fields and settings3 10 16 to categorise where errors occurred in the diagnostic process and what went wrong.

Supplemental material

Multiple (sub)categories could be assigned to one SAE. However, causally linked errors were only categorised at their initial point of failure. For example, if an error during physical examination led directly to ordering the wrong diagnostic test, it would only be categorised as an error during physical examination.
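The rule of categorising causally linked errors only at their initial point of failure can be sketched with a hypothetical data model (this is an illustration, not the study's coding procedure), in which each recorded error notes the upstream error that caused it, if any:

```python
# Illustrative sketch (hypothetical data model): when errors within one
# SAE are causally linked, only the initial point of failure is counted.
# Each error is a (DEER category, id of the causing error or None) pair.
def initial_failures(errors):
    """Keep only errors that are not the downstream result of another error."""
    return [category for category, caused_by in errors if caused_by is None]

# A physical-examination error (id 'e0') that directly led to the wrong
# diagnostic test being ordered, plus one independent assessment error:
errors = [
    ("physical examination", None),   # e0: initial failure
    ("diagnostic testing", "e0"),     # downstream of e0, not counted
    ("assessment", None),             # independent error, counted
]
print(initial_failures(errors))  # ['physical examination', 'assessment']
```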

Eindhoven Classification Model

The factors contributing to the occurrence of these errors were described using the ECM.22 23 This model classifies contributing factors as human, organisational, technical or other (including patient-related factors), each with a distinct set of underlying subclassifications (see table 1).

Table 1

Classification of contributing factors according to the Eindhoven Classification Model22 23

Data extraction and analyses

The data from the Access database form were extracted using the RODBC package24 for further analysis in R25 and RStudio.26 Standard descriptive statistics were used: means and percentages for continuous or count data, and medians and IQRs for variables that were not normally distributed or were categorical. Furthermore, the most common categories from the DEER taxonomy were combined with the ECM classifications in order to identify common co-occurrences.
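The co-occurrence step amounts to a simple cross-tabulation over cases. The study itself used R; the following is a minimal Python illustration with entirely hypothetical case data, in which each case carries a set of DEER subcategories and a set of ECM classes:

```python
from collections import Counter

# Illustrative sketch (hypothetical cases, not the study data set):
# count how often each DEER subcategory co-occurs with each ECM class.
cases = [
    {"deer": {"follow-up of abnormal test result"}, "ecm": {"HRC", "TD"}},
    {"deer": {"erroneous lab/radiology reading"},   "ecm": {"HKK"}},
    {"deer": {"follow-up of abnormal test result"}, "ecm": {"HRM", "TD"}},
]

co_occurrence = Counter(
    (deer, ecm)
    for case in cases
    for deer in case["deer"]
    for ecm in case["ecm"]
)

# How often did technical design (TD) co-occur with follow-up errors?
print(co_occurrence[("follow-up of abnormal test result", "TD")])  # 2
```

Ranking the resulting pairs by count yields the kind of DEER-by-ECM overview reported in table 3.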


Results

Of the 71 Dutch hospitals that were invited to participate, 35 replied after the first invitation, and 31 (43.7%) agreed to participate. All hospital types (general hospitals, teaching hospitals and university hospitals) participated. Several hospitals submitted more than four reports; eight of these extra reports matched the other inclusion criteria and were therefore allowed to remain in the data set for further analysis. This resulted in a total of 139 submitted reports (median of 4 reports per hospital, range 3–11). A total of 30 reports were removed because they did not meet the inclusion criteria (eg, an SAE occurring in the emergency department, an SAE that did not relate to the diagnostic process), were a duplicate report, or were incomplete or unclear. The remaining 109 reports were included for further analysis.

Safer Dx

All included SAE reports scored 4 or higher on the Safer Dx Instrument’s concluding 13th item, and thus all SAE reports were considered diagnostic errors.

Generic analysis of hospital, patient and clinician characteristics

A general overview of the hospital, patient and clinician characteristics, collected through the Generic Analysis Framework,19 is shown in table 2. Patients involved in diagnostic SAEs had a median age of 65.5 years and 55% were male. Most patients initially visited the hospital for scheduled diagnostic testing (eg, radiology, laboratory; 33.0%) or an emergency care admission (32.4%). Nearly one-third (31.2%) of the patients died because of a diagnostic SAE and a large group suffered disability, either transient (23.8%) or permanent (22.9%).

Table 2

General overview of hospital and patient characteristics

SAEs occurred in 22 different medical specialties, most often at general internal medicine (14.7%), cardiology (12.8%), pulmonology (11.0%) and surgery (11.0%). Care teams were commonly multidisciplinary (55%, n=60), most often involving surgery (11.9%), cardiology (11.0%) and neurology (9.2%) as consulted specialties. Diagnostic specialties were often involved as well, especially radiology (80.7%), clinical chemistry (33.0%) and pathology (22.0%). Care teams included medical specialists (95.4%), residents (54.1%) and nurses (40.4%). Furthermore, patients were regularly transferred between medical specialties (29.4%, n=32). These transitions of care meaningfully contributed to the occurrence of the SAE in 90.6% (n=29) of those cases.

DEER taxonomy

A total of 307 DEER taxonomy categories could be assigned to the 109 diagnostic SAEs. Results show that SAEs in Dutch hospitals occurred in every phase of the diagnostic process (see figure 1), but most often in the diagnostic testing phase: 78.9% of all cases had at least one error in this phase. These errors especially related to delayed follow-up of (abnormal) test results (33.9%), erroneous laboratory or radiology reading (22.9%) and reporting of results to the clinician (18.3%). Furthermore, many SAEs occurred in the assessment phase of the diagnostic process (43.1% of all cases had at least one error during assessment), especially with regard to failure/delay in considering the diagnosis (19.3%) and too much weight given to a competing/coexisting diagnosis (17.4%). Lastly, many errors occurred in the follow-up phase (38.5% of all cases had at least one error related to this phase), especially with regard to delayed follow-up or rechecking of the patient (34.9%).

Figure 1

Occurrence of Diagnostic Error Evaluation and Research (DEER) categories in serious adverse events.

Contributing factors based on the ECM

SAE reports were classified according to the ECM. Results show that diagnostic SAEs have several contributing factors, the majority being human or patient-related (see figure 2).

Figure 2

Common contributing factors of diagnostic error in serious adverse events, classified according to the Eindhoven Classification Model.32 A full overview of the associated definitions is available in table 1.

Patient-related factors were present in almost three-quarters (73.4%) of the studied SAE reports. Most patient-related factors contributing to SAEs concerned atypical clinical presentation (33.9%) or comorbidity (20.2%). Other examples involve age (5.5%), failure to disclose symptoms (4.6%) and communication or language issues (2.7%).

Nearly every report had at least one human factor contributing (98.2%). The majority of these human errors related to coordination or communication between professionals (HRC, 41.3%; eg, failure to communicate critical findings), task planning and/or execution (HRI, 41.3%; eg, incorrect reading of results, errors in data entry or registration of a test result, incorrect diagnostic assessment) and knowledge (HKK, 38.5%; eg, lack of experience in assessing a diagnostic test).

Organisational factors were present in almost two-thirds of all SAE reports (63.3%). The most common organisational factor that contributed to SAEs was the quality or accessibility of protocols (OP, 32.1%). This includes unclear departmental regulations or protocols, protocols being out of date and internal protocols differing from national protocols.

Technical factors were present in 28.4% of the SAEs. They most often involved technical design (TD), which exclusively involved usability issues with the electronic health record (EHR) and was found to be a contributing factor to SAEs in 25.7% of SAE reports.

Common contributing factors (ECM) of the main DEER categories

In order to find the most common contributing factors to the most common errors, co-occurrences were calculated. Table 3 shows the most common DEER (sub)categories, the contributing factors from the ECM that co-occurred most often with them and examples of the interplay of the different causes. Patient-related factors could be linked to all of the most common DEER categories and were therefore omitted from the table for clarity.

Table 3

Most prevalent DEER categories, their most prevalent contributing factors from the ECM and relevant examples

Results in table 3 show that knowledge-based behaviour (HKK) and task planning or execution (HRI) were the most prevalent contributing factors when erroneous laboratory/radiology reading and failure/delay in considering the diagnosis occurred. Monitoring, coordination and technical design (HRM, HRC, TD) were prevalent contributing factors regarding reporting or follow-up of (abnormal) results. Lastly, protocols (OP) seem to play a role when errors in follow-up occurred.


Discussion

An in-depth analysis of a total of 109 SAE reports from 31 Dutch hospitals was conducted to better understand contributing factors of diagnostic SAEs.

Results show that diagnostic SAEs occur in nearly every specialty and department and involve multidisciplinary care teams in the majority of cases. The large variability in involved specialties is congruent with other research on diagnostic error.4 Furthermore, results show that when a patient was transferred to a different department or specialty, this transition of care contributed to the occurrence of the SAE in 90% of cases. This was often due to communication issues and incorrect or incomplete transfer of information (eg, a referral letter that did not reach the intended department, causing delays). These results underline the risks of transitions of care and are in line with previous research showing that patient handoffs during transitions of care are linked to poor patient outcomes.27 Decreasing gaps in diagnostic care during care transitions is an important way to reduce diagnostic errors, which has also been identified as a high priority by patients.28

Similar to other studies,3 10 16 results show that diagnostic errors occurred most frequently during the diagnostic testing and assessment phases of the diagnostic process. Furthermore, results of the current study show many errors in follow-up or rechecking of the patient, a category that has not been found to be this prevalent in previous studies. This might be grounded in the nature of the cases. Many cases involved patients who were in the hospital for diagnostic testing or other scheduled appointments. In the Netherlands, these types of appointments are part of hospital care, while, for example, in the USA, these appointments are often part of outpatient care in outpatient clinics, which are separate from the hospital. For some of these patients, their diagnostic process was stretched out over multiple appointments, sometimes with different clinicians. It is likely that this results in more opportunities for errors or delays in follow-up or rechecking of a patient. This is likely different from studies that use inpatient cases or cases with less room for (larger) follow-up gaps, such as cases from the emergency department.

The DEER taxonomy and ECM results were crossed to examine which contributing factors underlay the most common errors. This showed that cases with certain DEER taxonomy subcategories often had specific contributing factors. First, factors related to knowledge (HKK) and task planning and execution (HRI) were often present when errors in clinical reasoning occurred (ie, the DEER taxonomy subcategories ‘erroneous reading of diagnostic test’ and ‘considering the diagnosis’). These subcategories require extensive diagnostic knowledge to make correct decisions and the skills needed to interpret diagnostic tests correctly. Second, coordination and monitoring factors (HRC and HRM) often contributed to the occurrence of a diagnostic SAE in cases involving the DEER subcategories ‘reporting of result to clinician’, ‘follow-up of (abnormal) test result’ and ‘follow-up or rechecking of the patient’. These DEER subcategories all contain actions that involve some level of communication and/or coordination between clinicians (eg, reporting the results to the treating physician, establishing which physician is responsible for the follow-up) and have a high monitoring need (eg, checking whether a follow-up appointment is made). Furthermore, they were often accompanied by technical design (TD) and protocol (OP) factors, which makes it likely that these types of errors are facilitated by usability issues with the EHR and by missing protocols or guidelines. The distinction between these two groups of factors is insightful because it shows that diagnostic SAEs do not always result from errors in clinical reasoning: many errors occur because of failures in communication between clinicians and possibly deficiencies in support systems such as the EHR.

These results seem to be different from the results of a study into diagnostic SAEs from the emergency department,16 in which researchers found mainly human errors related to knowledge gaps. The reason for these differences may be found in the different workflows in the emergency department compared with other hospital departments. In the emergency department, patients are often very ill and need to be diagnosed quickly. There is often a quicker follow-up of test results, since those are needed for medical and diagnostic decision-making. Most cases in the current study involved scheduled visits (eg, for a diagnostic test) rather than emergency visits, providing opportunities for failure in follow-up of the patient or their test results caused by communication or coordination issues (eg, delayed or missed test results).

Errors related to communication and monitoring factors require different improvement measures than errors related to knowledge and skill gaps. Knowledge and skill gaps can be addressed with more exposure to and practice with a wide variety of clinical cases.29 Interventions to prevent communication and monitoring errors should focus on improving collaboration and coordination between physicians. Interventions focused on ‘closing the loop’ on diagnostic tests and patient handovers could be important for improving diagnostic test follow-up and reducing monitoring errors. These interventions should be organised at a systems level, as poor (technological) systems can amplify or exacerbate human errors (eg, unclear protocols, poorly designed EHR systems). Technological support systems in particular, such as the EHR, should play a vital role in preventing communication and coordination errors.30

Lastly, the ECM results showed that patient-related factors were present in nearly three-quarters of the diagnostic SAE cases. Patient-related factors included a wide range of factors but were most commonly related to atypical disease presentation and comorbidities. These factors have been identified as features of potential diagnostic difficulty.31 They can contribute to diagnostic error because they mask the correct diagnosis. Increasing awareness of the potential influence of atypical symptoms, comorbidities and other patient-related factors on the assessment and clinical reasoning of physicians could help improve diagnostic safety.

Strengths and limitations

This study included a wide range of SAE reports from different types of hospitals, describing diagnostic errors in a large variety of hospital specialties. The SAE reports are the result of a thorough investigation of the root causes of the SAE, performed by an experienced, multidisciplinary, independent hospital committee. The reports are checked on content and quality by the Dutch Health and Youth Care Inspectorate, which safeguards their quality. Due to the nature of the SAE reports, it was possible to use information from several different perspectives, allowing for a more complete and thorough analysis of the contributing factors. The interviews with the involved physicians that were reported in the SAE reports were especially important for the analysis of diagnostic reasoning and communication; this would not have been possible using health records alone. Furthermore, SAE reports investigate the entire case: the investigation and interviews are not restricted to specific specialties or hospital departments. This helped identify communication and coordination errors between specialties and departments.

This study used a combination of instruments (Safer Dx Instrument, Generic Analysis Framework, DEER taxonomy and ECM) that has not been used before. We believe these tools are complementary and allow for a deeper understanding of the data. The combination of the DEER taxonomy and the ECM was especially insightful, as it revealed which contributing factors underlay frequently occurring errors. This makes it easier to match suitable improvement measures to certain types of errors (eg, errors in reading a test could be addressed by improving knowledge and skills, while measures against errors in follow-up of test results should focus more on coordination and monitoring, and on better protocols and technical design).

A limitation of this study is that the SAE reports were not created specifically for use in this study; no researchers were involved during the creation of the reports, and additional interviews with involved healthcare professionals or patient/family members were not possible. However, the quality of the data from the SAE reports is safeguarded, as an independent hospital committee performs the investigation and the reports are validated by the Dutch Health and Youth Care Inspectorate.

Another limitation of this study is the possibility of two types of bias: hindsight bias and selection bias. The SAE investigations by the independent hospital committee are performed after the hospital suspected a medical error occurred. Because the investigations are performed in retrospect, with knowledge of the outcome(s) of the suspected error, hindsight bias could play a role.

Furthermore, this study likely has selection bias. While all hospitals in the Netherlands were invited to participate in this study, our inclusion goals were met after one round of invitations. Hospitals that did not reply after this round were not given a second chance to participate. This might have led to a selection of hospitals with a larger or more active patient safety department. Furthermore, hospitals were asked to select and send up to four recent reports (between 2018 and 2021) relating to the diagnostic process. If more than four were available, instructions were given to select the most complete ones. This could have led to selection bias.

Lastly, this study included cases with a confirmed SAE that almost always resulted in some form of patient harm. Therefore, these cases are not representative of all diagnostic errors. However, it can be assumed that factors that play a role in an SAE also play a role in diagnostic errors without clinical consequences or patient harm.13


Conclusion

This study shows that analysing diagnostic SAE reports allows for the identification of frequently occurring types of errors and their common contributing factors. This knowledge can contribute to improvements to enhance patient safety. Specifically, by improving communication and coordination within healthcare teams on a systems level, errors related to patient follow-up and follow-up and communication of test results can be reduced, whereas reducing errors related to clinical reasoning should be focused on closing knowledge and skill gaps. In addition, results show a possible role of the EHR in contributing to diagnostic errors, and therefore possibilities for reducing them. However, more research is needed to further specify usability issues with the EHR, its role in the diagnostic process and the effects on patient safety.

Data availability statement

No data are available.

Ethics statements

Patient consent for publication

Ethics approval

The use of the secondary anonymised reports is not subject to the Medical Research Involving Human Subjects Act (WMO); the requirement for ethics committee (IRB) approval was therefore waived.


Acknowledgements

We thank the participating hospitals for sharing their SAE reports. We thank Mees Baartmans for sharing the Generic Analysis Framework and for his advising role on the used instruments. We thank the members of the workgroup ‘Risk profile of the diagnostic process’ for input in the data collection and interpretation of the results: Maarten van Aken, Jurriën Reijnders, Hubert Prins, Marius van den Heuvel, Ariane Cats, Joost te Riet, Femke Verbree and Marc ten Broek. We thank Ester Rake and Lotte Houtepen for their support with the logistics and coordination of this project.


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.


  • Twitter @JackyHooftman, @laurazwaan81

  • Contributors JH and ACD contributed equally to the paper. JH advised on the use of the Safer Dx Instrument and DEER taxonomy, carried out the analyses and drafted and revised the manuscript. ACD collected and analysed the SAE reports, carried out the statistical analyses and drafted an initial version of the manuscript. IS analysed the SAE reports and gave important feedback on the manuscript. AvdB, EP and LZ were involved in the conception of the study and collection of the SAE reports, and gave important feedback on the manuscript. AvdB and LZ are responsible for the overall content as guarantor. All authors read and approved the final version of the manuscript and agree to be accountable for all aspects of the work.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.