Abstract
Background Patient safety incident reporting systems (PSRS) have been established for over a decade, but uncertainty remains regarding the role that they can and ought to play in quantifying healthcare-related harm and improving care.
Objective To establish international, expert consensus on the purpose of PSRS in monitoring and learning from incidents, and to develop recommendations for their future role.
Methods After a scoping review of the literature, semi-structured interviews with experts in PSRS were conducted. Based on these findings, a survey questionnaire was developed and subsequently completed by a larger expert panel. Using a Delphi approach, consensus was reached regarding the ideal role of PSRS. Recommendations for best practice were devised.
Results Forty recommendations emerged from the Delphi procedure on the role and use of PSRS. Experts agreed that reporting systems should not be used as epidemiological tools to monitor the rate of harm over time or to appraise the relative safety of hospitals. They agreed that reporting is a valuable mechanism for identifying organisational safety needs. The benefit of a national system was clear with respect to medication errors, device failures, hospital-acquired infections and never events, as these problems often require solutions at a national level. Experts recommended training for senior healthcare professionals in incident investigation. The consensus recommendation was for hospitals to take responsibility for creating safety solutions locally that could be shared nationally.
Conclusions We obtained reasonable consensus among experts on the aims and specifications of PSRS. This information can be used to reflect on existing and future PSRS and their role within the wider patient safety landscape. The role of PSRS as instruments for learning needs to be elaborated and developed further internationally.
- Incident reporting
- Patient safety
- Safety culture
- Significant event analysis, critical incident review
- Health policy
Introduction
The integration of patient safety reporting systems (PSRS) into healthcare organisations stemmed from a desire to achieve the level of resilience and response to error attained in other industries, such as aviation.1,2 In the USA, the Aviation Safety Reporting System is central to risk management, and its benefits in this context are well described.3 Following this and other ‘success stories’ for reporting, in 2004 the Institute of Medicine (the US non-governmental advisory body, now the National Academy of Medicine) recommended the national adoption of PSRS.4,5 Proposed as a key method for understanding patient safety risks in hospitals, PSRS now exist in healthcare systems internationally—including the Advanced Incident Management System run by the Australian Patient Safety Foundation in South Australia and the Danish Patient Safety Database.6,7 In the UK, the National Patient Safety Agency established the National Reporting and Learning System (NRLS) in 2003. This database has grown and currently receives >1 million reports per year in England.8 Many governments are now making PSRS a mandatory requirement for hospitals.9,10
The premise of PSRS in healthcare, as in other industries, is that they allow the regular recording of patient safety incidents captured by healthcare providers at the frontline of service delivery. Incident reports have the potential to provide insights into patient harm and allow the development of preventative strategies.11 The aims of PSRS were initially broad: monitoring levels of harm, identifying rare events and rapidly disseminating knowledge about high-risk processes of care. The intent was for PSRS to be ‘blame-free’, to be used for learning and solution generation, and to foster a cycle of improvement.
Reporting has had relatively rapid uptake internationally. This success may reflect considerable organisational drives to improve patient safety and contribute to a learning culture among healthcare professionals. There are many examples where the effective use of PSRS has enhanced safety or provided greater understanding of system weakness or failure.12–15 Encouraging staff to report, and creating an environment where mistakes are treated as opportunities for learning and solution development, are critical in enhancing patient safety. A key premise of reporting has been that detecting harm, especially when deemed preventable, can trigger safety initiatives and interventions so that similar safety incidents do not recur.16
The WHO published draft guidelines for reporting systems in 2005, providing recommendations for their establishment. These included the need to set out clearly the objectives of the system, as well as guidance on issues such as keeping reports confidential and dealing rapidly with serious hazards.17 Despite such guidelines, concern remains that the objectives of PSRS are not clear.18 Considerable investment and resources have been devoted to reporting—including major drives to increase the volume of reported incidents.19 Focusing on increasing reporting rates in isolation is likely to create new challenges: a significant increase in the number of reports would likely result in a bottleneck in which a large proportion of reports simply ‘get lost’ in the system. Thus, if the aim of PSRS is to promote learning and to improve patient safety, achieving that aim is compromised by the heterogeneity and volume of incidents.16 A recent interview study of safety experts by Mitchell et al suggested that systems were overwhelmed by the unprecedented volume of incidents collected, which were impossible to process.20
Critically for PSRS, over a decade since reporting began at large scale, it is unclear whether hospitals are indeed safer.21–23 More incidents are reported each year—however, increased reporting rates likely reflect increased awareness of patient safety incidents rather than more occasions of unsafe care. Such awareness may be enhanced for those incidents that occur more regularly and are easy to define, including patient falls and drug errors. Often, these incidents do not actually result in patient harm. Documenting the causal or contributing factors to such patient safety incidents through a ‘system-failures’ approach provides more actionable data.17,18 These include reports of diagnostic delay, faulty processes, communication problems and staffing shortages. Even though a vast number of reports are collected annually, if the detection of all adverse events is the overall aim of PSRS, then this aim has not been achieved. Despite increased awareness, large PSRS, such as the NRLS in England, still underestimate the incidence of adverse events, detecting only 5% of incidents leading to harm.24
With increasing demand from government bodies and the public to be more transparent about healthcare-related harm, it is important to consider whether PSRS can provide monitoring, enhance learning from errors and generate solutions—as they have achieved in other industries.
The objective of this study is to gain consensus from academic experts regarding the role of PSRS in monitoring and learning from hospital safety incidents.
Methods
A multimethod, multiphase approach was adopted to establish expert-derived recommendations on patient safety incident reporting. The Delphi consensus methodology, as described by Jones and Hunter,25 was employed and is described in detail below. Certain terms used in this study should be defined. ‘Incident’ refers to a patient safety incident: an event during patient care that has the potential to cause, or does cause, injury or harm to the patient.26 Incidents include ‘errors’ and ‘harm’. Errors are defined as actions or omissions that may or may not lead to patient harm, including near misses or no-harm events.27,28 Harm refers to physical injury or complication requiring further treatment, prolonged hospital stay, morbidity or mortality as a result of the process of care delivery.29
Stage 1: literature review and expert identification
A scoping review of the literature was conducted to understand the evidence base, develop research questions relevant to PSRS and identify academic experts in the field. PubMed was searched in May 2013 using the following keywords combined with the Boolean operators AND and OR: patient safety AND (reporting systems OR voluntary reporting OR incident reporting). In addition, the reference lists of relevant articles were hand-searched to identify additional articles/experts. The experts identified were then screened against the inclusion criteria.
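For illustration only, the sketch below shows how an equivalent Boolean query could be run programmatically against NCBI's public E-utilities API in Python; this is an assumption for reproducibility purposes, not part of the original method, which searched PubMed directly, and the exact phrase grouping is an approximation of the published strategy.

```python
import json
import urllib.parse
import urllib.request

# Approximation of the published search strategy (hypothetical grouping):
query = ('"patient safety" AND ("reporting systems" OR '
         '"voluntary reporting" OR "incident reporting")')

# Build an ESearch request against NCBI's public E-utilities endpoint.
params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 100,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

# Print the total hit count and the first few PubMed IDs returned.
print(result["count"], "records; first IDs:", result["idlist"][:5])
```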
Experts were identified through peer-reviewed publications using a previously specified method for consensus-driven recommendation development.30 Experts were invited to participate in the interview stage of the Delphi if they met three inclusion criteria: first, publication of at least six peer-reviewed articles in English on PSRS; second, expertise in the development, management or evaluation of a reporting system; and third, a national-level role in patient safety. Academic experts who had published ≥3 peer-reviewed articles on reporting systems were identified and invited to participate in the expert Delphi consensus panel (stages 3 and 4). The primary researcher (A-MH) reviewed all identified articles to outline the main areas of academic debate regarding the role of PSRS. These were addressed in the second stage of the Delphi process using semi-structured interviews.
Stage 2: semi-structured interviews with incident reporting system experts
Fifteen international experts were identified and invited to participate in this stage. Of these, 14 experts (93.3%) agreed to participate. An interview topic guide was developed by a team of clinicians (A-MH, EMB, EM) and psychologists/patient safety experts (LH, NS) (see online supplementary appendix 1). One pilot interview was conducted with a member of the research team and expert in PSRS (NS) to ensure relevance and clarity. Piloting resulted in expanding the theme regarding accountability for error in healthcare. All interviews were semi-structured, were conducted by A-MH (3 in person, 10 by telephone and 1 via email exchange) and were recorded and transcribed verbatim. The interview topic guide was structured around three key themes derived from the literature:
What can reporting systems achieve? What are the strengths and weaknesses of using PSRS to monitor and/or learn from healthcare errors?
How can national reporting systems, such as the NRLS, be improved to maximise their utility?
What incidents should be prioritised for reporting? Who should be accountable for analysis, investigation, feedback and solutions based on reported incidents?
Thematic analysis
All interviews were analysed thematically by the primary researcher to identify emergent themes. The thematic analysis incorporated both deductive and inductive approaches: topics/themes explored in the semi-structured interviews were used as a template to guide the deductive analysis,31 and additional themes that emerged were identified inductively.32 A second reviewer with expertise in qualitative methodology (LH) analysed a subset (3/14) of the interviews to ensure consistency in theme extraction and reduce bias.
Stage 3: Delphi survey round 1
The themes that emerged from the expert interviews were used to inform the development of the Delphi survey. The survey was developed and piloted with patient safety experts (LH, NS, EM) to assess content and flow, as well as comprehension and clarity of questions. The survey was administered electronically via Qualtrics survey software (http://www.qualtrics.com). In total, the survey contained 58 statements, for each of which experts were asked to state their level of agreement on a 5-point Likert scale or to select from multiple-choice options. The survey was emailed to a wider panel of experts (including the original 14 interviewees) identified on the basis of their peer-reviewed publications. Two reminder emails were sent at two-week intervals. Likert scores were analysed as follows: where the response was ‘agree’ or ‘strongly agree’, the response was classed as ‘agree’, whereas the responses ‘neutral’, ‘disagree’ or ‘strongly disagree’ were classed as ‘disagree’. Consensus was set a priori at 70% agreement for a statement to be included as a recommendation, as per standard Delphi method criteria.33 Experts were invited to provide free-text comments for each question in order to better understand the rationale behind their responses.
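To make the scoring rule concrete, the minimal Python sketch below applies the collapsing rule and the a priori 70% threshold described above to a hypothetical set of panel responses; the function name and data are illustrative (the study itself captured responses via Qualtrics).

```python
# Round 1 analysis rule, as described in the text: 'agree' and
# 'strongly agree' are collapsed to agreement; 'neutral', 'disagree'
# and 'strongly disagree' count as disagreement. Consensus requires
# >=70% agreement. Panel data below are hypothetical.

CONSENSUS_THRESHOLD = 0.70
AGREE_RESPONSES = {"agree", "strongly agree"}

def consensus_reached(responses):
    """Collapse 5-point Likert responses and test the 70% criterion."""
    agree = sum(1 for r in responses if r.lower() in AGREE_RESPONSES)
    rate = agree / len(responses)
    return rate >= CONSENSUS_THRESHOLD, rate

# Hypothetical panel responses to one statement:
panel = ["strongly agree", "agree", "agree", "neutral", "agree",
         "disagree", "agree", "agree", "strongly agree", "agree"]
reached, rate = consensus_reached(panel)
print(f"agreement = {rate:.0%}, consensus reached: {reached}")  # 80%, True
```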
Stage 4: Delphi survey round 2
Responses to the questions received within round 1 of the Delphi were analysed and then individually fed back to each expert in round 2, alongside their own responses. For example, if an expert agreed with a statement, they would be reminded of this and shown a chart of the percentage of the panel who also agreed versus those who disagreed or were neutral. This allowed the experts to see their own response as well as the responses of the rest of the panel without knowing the identity of the individuals represented. They were then asked the same question again and were able to keep their original answer or modify it. This approach allowed participating experts to review their prior responses and to change them in light of their peers’ answers. The analysis of Likert scales was performed as described for round 1.
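The feedback step can be illustrated with a short, hypothetical sketch: each expert is shown their own round 1 answer together with the anonymised distribution of panel responses before re-answering. All identifiers and responses below are invented for illustration; this is not the study's actual software.

```python
from collections import Counter

def round2_feedback(expert_id, statement, round1_answers):
    """Format round 2 feedback for one expert.

    round1_answers: dict mapping anonymised expert ID -> Likert response.
    The expert sees their own prior answer plus the panel distribution,
    with no individual identities revealed.
    """
    n = len(round1_answers)
    dist = Counter(round1_answers.values())
    lines = [f"Statement: {statement}",
             f"Your round 1 response: {round1_answers[expert_id]}"]
    for option in ("strongly agree", "agree", "neutral",
                   "disagree", "strongly disagree"):
        lines.append(f"  {option}: {100 * dist.get(option, 0) / n:.0f}% of panel")
    return "\n".join(lines)

# Hypothetical round 1 data for one statement:
answers = {"E01": "agree", "E02": "agree", "E03": "neutral",
           "E04": "strongly agree", "E05": "disagree"}
print(round2_feedback("E01", "PSRS should be used to share safety solutions",
                      answers))
```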
Results
Literature review and semi-structured interview
The 14 experts who had published ≥6 peer-reviewed publications related to PSRS and took part in this study represent an international body of experts in reporting systems across five countries (table 1). Together, they had published 90 peer-reviewed papers on PSRS at the time of the study.5–7,9,11,16,21,29,34–116 The median interview length was 41.0 min (IQR 12.8 min). Eight main themes for subsequent Delphi consensus were generated, with 58 questions regarding specific issues in reporting.
In the interviews, the experts discussed a broad range of issues related to reporting. They identified the demands placed on reporting systems to provide both a learning platform and a surveillance system. They addressed the burden of numerous different types of incidents that could be reported and discussed how inclusive systems should be. There was enthusiastic discussion about ways in which to improve reporting quality and the importance of embedding reporting into the culture of hospitals, stressing that reporting should be used for system improvement and not punishment of individuals. There were mixed views regarding the optimum ways to collect data and feed back to staff. Some experts had a hospital-focused perspective, whereas others were nationally centred with respect to where reports should be collated. All experts agreed the responsibility for improvement lay with the hospitals.
From these interviews, seven topic areas for the subsequent Delphi expert review were generated as follows:
roles that reporting systems can achieve
roles reporting systems cannot fulfil
methods to maximise learning and feedback from reporting systems
the role of national and local data collection and safety solutions
voluntary versus mandatory data collection
investigation of incidents and accountability
staff training in reporting and investigating incidents.
Delphi survey
A total of 60 experts were invited to complete the Delphi survey, of whom 30 agreed to participate (50% response rate). Of these, 90% had >10 years’ experience in incident reporting and 87% had been involved in the development of an incident reporting system. Interestingly, the median number of published papers on reporting was higher among the experts who agreed to take part than among those who did not (median 4 (IQR 3) vs median 3 (IQR 1); p=0.003). Significantly more experts from English-speaking countries agreed to participate than experts from non-English-speaking countries (100% vs 86.7%, p=0.038).
After the first round, experts reached consensus (>70% agreement) on 25 statements regarding the purpose and remit of national PSRS. All 58 statements, including those that had reached consensus, were included in the second round to check for consistency of response and to increase levels of consensus.
In total, 26/30 (87%) experts completed the second round of the survey. Of the 58 questions, 40 reached formal consensus and formed the expert-derived recommendations of this part of the study (table 2). A further seven statements achieved >65% agreement, approaching the preset 70% criterion; 11 statements remained unresolved, with no clear agreement at the end of the Delphi (table 2).
Role of reporting systems
Experts agreed that incidents reported to PSRS should be used to identify the types of safety problem that exist and to detect rare events not identified by other methods (96.4% and 92.3%, respectively) (box 2). PSRS should ideally be used to share safety solutions between hospitals; shared learning was identified as a primary role of reporting (92.3% agreed). Experts recommended that for selected rare types of event, such as ‘never events’ (defined within the National Health Service (NHS) in England as serious incidents that are wholly preventable), mandatory reporting could be used to detect their incidence (73.1% agreed).117
Roles reporting systems cannot fulfil
The expert panel considered the most valid and reliable methods for measuring the rate of harm or error within a hospital or health service (box 1). The majority (80.8%) of participants agreed that prospective observation of care processes was the most robust of the six methods considered. The remaining methods were expert retrospective case-note review, trigger tool-based case-note review, electronic database record monitoring such as Hospital Episode Statistics (HES) or Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs),i voluntary reporting and mandatory reporting. There was substantial agreement that data obtained from PSRS are not a reliable or valid measure of the safety of a hospital or of the incidence of patient harm (80.8% and 76.9%, respectively). Similarly, the expert panel approached consensus that PSRS data should not be used to identify unsafe hospitals or unsafe healthcare professionals (69.0% and 65.4%, respectively).
Themed expert views expressed during semi-structured interviews
Roles that reporting systems can achieve
“It flags issues that, you know, experts can think about and interpret”.
“Incident reports are mainly for things which are surprises or unusual…not things you monitor routinely”.
Roles reporting systems cannot fulfil
“Reporting systems are not a valid source for detection or delineation of incidence or prevalence of events, they just tell you they are occurring…it's the starting point”.
Methods to maximise learning and feedback from reporting systems
“Definitely collect less data of better quality”.
“Share learning horizontally, to create a structure for peer learning but also for vertical accountability”.
“Feedback from incidents is best handled locally because if there are changes to be made then that's where they should happen and the further you are away both geographically and conceptually from where the incident occurred the less engaged you feel and the harder it is to feed back”.
The role of national and local data collection and safety solutions
“The reporting system should be as close as possible to the care unit itself to optimize learning”.
“Change the form so it said ‘does this have national implications this incident, if so please state why and what you believe needs to be done at national level’”.
“People at the coalface do not have the perspective by and large to know what has national impact”.
Voluntary versus mandatory data collection
“I think focusing on certain events would be useful as there are a handful of events that a nation or a region could decide are particularly memorable to incident reporting and that there is no other way of getting that information”.
“Distinguish between reporting that you want to use for learning and…for accountability…they need to be absolutely firewalled and separate”.
Investigation of incidents and accountability and staff training
“A very top down system…creates a level of learned helplessness on the part of the frontline clinicians that I think is destructive”.
“Have the colleges develop education programmes especially for the knowledge deficit or skill deficit in reporting and analysis”.
“People are more likely to report things if they feel that something's going to happen as a result, that people within their tribe are the ones who are assessing the incidents”.
Methods to maximise learning from reporting systems
Ten recommendations were made by the expert panel to improve patient safety incident data capture and to maximise the potential for learning from reported incidents (box 2). These included standardising and linking data sets (84.6% and 73.1% agreed, respectively), educating staff on national priorities for reporting and emphasising that the quality of reports, rather than their quantity, is most useful for learning (77.0% agreed). The importance of ensuring the anonymity of the reporter was emphasised (73.1% agreed), and sharing data and using reports in educational programmes was recommended (92.3% agreed). It was agreed that the greatest value of reporting lay in obtaining solutions to errors from frontline staff (84.6% agreed).
What can reporting systems achieve and what should they not be used for?
Role for patient safety reporting systems
Identifying safety issues
Detecting rare events
Sharing safety solutions
Monitoring ‘never events’
Avoid using reporting systems to
Measure how safe one hospital is compared with another
Identify unsafe healthcare professionals
Identify unsafe hospitals
Measure the incidence of harm in a health system
The role of national and local data collection and safety solutions
There was consensus on which types of event should be collected at a national level (box 3). Incidents with the potential to be solved nationally, such as device failures (88.0% agreed), never events or serious untoward incidents (88.0% agreed), hospital-acquired infections (80.8% agreed) and medication incidents (76.0% agreed), were examples the panel recommended be reported and analysed both locally and nationally. In contrast, issues such as staffing problems were considered more relevant locally (72.0% agreed). Of greater importance was the concept that initiatives to prevent harm and safety solutions should be generated locally and fed nationally, rather than the reverse, top-down approach (88.0% agreed).
Maximising learning and improving accountability
Protect and educate staff
Keep reporting anonymous
Share data and results with staff and other academics
Standardise and link data sets so the same data are captured in each hospital
Educate and train staff to report
Prioritise specific events for reporting and make staff aware
Local reporting, national learning
Device failures, never events and hospital-acquired infections are useful at a national level
Staffing issues and no harm/low harm events are useful at a local level
Solutions should be generated locally and shared nationally
Hospital responsibility for solving safety problems
Executive board member for patient safety
Feedback to staff should be hospital priority
Hospitals take responsibility for investigating own reports and generating preventative action
Voluntary versus mandatory data capture
The panel recommended that ‘never events’ or serious events such as wrong site surgery (92.0% agreed), device failures (80.8% agreed) and hospital-acquired infections (77.0% agreed) should be subject to mandatory reporting. Just over half of experts (53.8%) agreed that staff shortages and risk assessments should be captured by a voluntary system.
Investigation of incidents and accountability
All experts (100%) recommended that hospitals should have an executive board member responsible for patient safety, and most agreed that hospitals should be accountable for investigating their own reports (84.6% agreed) (box 2). Experts agreed that hospitals should not determine their own reporting priorities (84.6% agreed). Consensus was reached that never events and incidents leading to death or severe harm should be prioritised for investigation (73.1% agreed). Individual feedback after investigation should be provided to reporters (84.6% agreed). However, there was a lack of consensus regarding who should provide this feedback.
Staff training in reporting and investigating incidents
The expert panel agreed that the value of reporting systems would increase if staff were better trained to identify and report safety incidents (80.8%). It was recommended that senior nurses, doctors and other healthcare professionals be trained to investigate incidents (73.1% agreed).
Discussion
This study reached some key conclusions regarding the role of PSRS in healthcare. Of primary importance is the consensus that PSRS cannot, and should not, be used to monitor the incidence of harm in hospitals.118,119 This has important implications for governing bodies wishing to identify ‘unsafe’ hospitals using incident reporting data. Previous research also found no evidence to suggest that higher reporting rates are associated with higher mortality ratios or with data collected from other safety-related reporting systems.120 Further, there was agreement in our expert panel that reporting rates reflect the safety culture of an institution, in keeping with other literature.28 All experts in this study agreed that other data collection methods (such as prospective observation) are superior to incident reporting for monitoring the rate of incidents and the safety of practices. However, there was agreement that for rare and serious events such as wrong site surgery, where reporting is mandatory, reporting systems may be useful for monitoring the frequency of incidents. Given that the denominators of incidents are unknown, it is difficult to speculate what the sensitivity of PSRS for detecting reported ‘never events’ might be.59 Nevertheless, for rare incidents, prospective observational methods or even retrospective methods would require significant resources to capture infrequent events, and reporting is likely to be the most feasible option.107
Other than the exceptions above, experts recommended that reporting systems should be used to describe the types of safety issue rather than the rate of incidents in organisations. An example of this would be incident reports concerning delayed diagnosis. It has been suggested that regardless of whether a hospital reports 1 delayed diagnosis per month or 10, such reports indicate that diagnostic delay is a safety problem requiring further investigation.85 Reports act as a signal of underlying problems.119 This view may explain why experts in this study agreed that the quality of reports was more important than their quantity.121 Along with this theme was the consensus that certain events, such as staff shortages, were more useful locally than nationally. Limiting the volume of national reports by specifying incidents of national interest, while enabling local hospitals to continue collecting data rather than feeding all reports nationally, may allow national bodies to focus on current safety priorities and be selective about national resource allocation.119 This has advantages for reporters: reducing the burden of reporting events that provide limited learning can reduce resource wastage and frustration.122 Events such as patient falls, which comprise the bulk of the NRLS data, for example, may not need to be collected at a national level, as they are well understood and their prevention well evidenced.
The panel recommended that learning from error should be the main aim of reporting systems. Enabling staff to propose solutions, and training them to take responsibility for investigating and understanding system failures, was deemed important. Exactly how staff should be trained, and in which methods of incident analysis and investigation (such as root cause analysis (RCA)), was not explored in the current study. Research conducted by Bowie et al found that NHS healthcare workers trained in RCA do not necessarily participate in incident investigations and that current training in RCA needs to be improved.123 Whether it is feasible for all staff to be trained, or whether training should be reserved for selected staff, is debatable, which may explain why experts only reached agreement that senior staff members should be trained in analysis. Training in event investigation was beyond the remit of this study and requires further attention. Consensus was reached that staff should be trained on how and what to report; this has been shown to increase the number and also, importantly, the quality of reports.6,53
To improve shared learning between hospitals, consistent, minimum data sets were recommended. Anonymity of reporters was emphasised, a recommendation that concurs with the draft WHO guidelines and other studies. Anonymity is critical for protecting and enabling reporters to share experiences without fear of recrimination.124,125 The issue of mandatory versus voluntary reporting was also explored, as it has been a subject of much debate in the literature.126,127 The lack of any valid method for governing how well hospitals comply with reporting means that although certain types of report are mandated, compliance cannot be easily assessed. The expert panel suggested reporting specific, important and clearly defined events such as wrong site surgery, hospital-acquired infections and device failures on a mandatory basis. Reinforcing that reporters must report certain events may increase general awareness of their importance and thus help embed reporting as a cultural norm. Experts considered whether hospitals should determine their own reporting priorities. There was widespread disagreement with this approach; experts recommended instead that reporting priorities of national interest for mandatory monitoring purposes should be determined centrally, and healthcare professionals should be made aware of what these are.
Feedback is an essential element of a reporting system, and its importance is widely recognised in the literature as an integral part of learning from error.86,124,128 The expert panel reemphasised this view. It was suggested that hospitals should take responsibility for giving feedback rather than relying on a national process, and that feedback should be specific. Considering the large volume of reported incidents, experts recommended that feedback to staff who reported incidents of death or severe harm should be prioritised, while recognising that feedback on all types of incidents is valuable for learning. Hospitals should focus on generating solutions to their own safety problems that can then be shared nationally. Creating a ‘solution centre’ or national repository of safety ideas would enable this process. Hospitals should take responsibility for feeding back solutions to errors to their own reporting staff and be accountable for learning from reporting.
All experts recommended that hospitals should have an executive board member responsible for patient safety. This recommendation will enable hospitals to focus on and demonstrate that learning from harm is a top priority. Botje and colleagues showed that having quality as an item on the executive board agenda increased efforts to improve standards.129 Although safety is everybody's responsibility, having an executive directly accountable for the issue allows safety to be placed firmly on the executive board's agenda.129 Healthcare quality comprises three strands: clinical outcomes, patient experience and patient safety.130 We suggest that appointing safety champions to actively engage staff and allocate resources for safety is vital to ensure safety is not overlooked.20,131
Topics that failed to reach consensus
There were some areas where opinion remained divided or no strong conclusions were reached (table 2), which are important to consider. Experts rejected the idea of setting strict criteria for what to report, although they agreed that there should be mandatory reporting for specific never events and serious untoward events. This may be to preserve the freedom that reporters have to report unusual and new types of events. Although not reaching consensus, >65% of experts agreed that reporting systems should not be used to identify unsafe hospitals or healthcare professionals. The difficulty in ruling out reporting systems as a tool for monitoring dangerous practices is possibly due to the lack of other methods able to do so. However, consensus was reached that reporting systems are unsuitable instruments for measuring how safe a hospital is. On the same theme, there was indecision regarding where reports of healthcare professional misconduct should be directed. The experts did not agree to the inclusion of consultant/attending names or national identifying numbers in data collection, perhaps to ensure the protection of confidentiality and the maintenance of anonymity. There was also no agreement that morbidity and mortality conference outcomes should be reported nationally. Although experts were keen for reports of severe patient harm to be reported nationally, the suggestion that clinical teams external to the hospital be employed to investigate severe-harm reports was not endorsed.
Limitations
This study has recognised limitations. Delphi consensus groups can produce collective answers, but this does not always mean the consensus is valid. Since there is uncertainty regarding the utility of reporting systems, a Delphi method was deemed appropriate to draw together a wealth of expertise to address the question. Consensus methods should not be a substitute for rigorous, prospective studies, but conducting such studies to answer the questions posed in this Delphi would not be feasible. The study was further limited by potential selection bias, as only 50% of experts invited to participate in the second stage agreed to do so. We feel that this selection bias is somewhat mitigated because the experts who participated were all internationally renowned, with >10 years’ experience with reporting systems and numerous peer-reviewed publications on the topic. The second round of the Delphi had an acceptable response rate of 86.7%, thereby reducing attrition bias. The study was also limited in that the experts interviewed represented only a small number of countries. This may be due to our inclusion criteria extending only to articles in English; indeed, there were more experts from English-speaking countries in the respondent group. Future work should seek to understand how a wider group of countries report patient safety incidents and their views on reporting. Finally, the Delphi was limited by not seeking the views of frontline healthcare workers. Such views were partly represented, as many of the experts were practising clinicians, though engaging with stakeholders other than patient safety academics was outside the remit of this study. Future studies should certainly aim to capture the views of healthcare workers, patients and other stakeholders regarding the future of reporting systems.
Conclusions
The study has produced international expert consensus-based recommendations regarding optimal application of reporting systems. To the best of our knowledge, this is the first time such an exercise has been carried out. These recommendations can now be used to enable systems to evolve and be further developed. More research is required to maximise learning from error, through reporting.
Footnotes
Contributors All authors made substantial contributions to this research paper in accordance with the ICMJE requirements. AD, NS and EM conceived the research idea, and all authors contributed to the design. A-MH, LH and EMB analysed and interpreted the data. All authors drafted, revised and approved the article prior to submission. AD is the guarantor.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
i HES data are routinely collected UK hospital electronic records from which adverse events can be derived. AHRQ PSIs are US indicators derived from routinely collected hospital data.