Objective To evaluate the feasibility of a locally implemented incident-reporting procedure (IRP) in primary healthcare centres after 1 year.
Setting and participants Five primary healthcare centres caring for more than 43 000 patients in The Netherlands. GPs, medical nurses, physiotherapists, pharmacists, pharmacist assistants and trainees reported incidents (a total of 117 employees).
Methods An IRP was implemented in which participants were encouraged to report all incidents. In addition, dedicated ‘reporting weeks’ were introduced that emphasised reporting of minor incidents and near misses. In every centre, an IRP committee analysed the reported incidents in order to initiate improvements when necessary.
Outcome measures Frequency and nature of reported incidents, number of incidents analysed by the IRP committees and number of improvements implemented. In addition, the authors studied the actual implementation of the IRP and the acceptability as experienced by participants.
Results A total of 476 incidents were reported during a 9-month reporting period. Of all incidents, 62% were reported in a reporting week, and most were process-related. Possible harm for patients was none or small in 87% of the reported incidents. IRP committees analysed 84 incidents and found 230 root causes. All participating centres had initiated improvement projects as a result of reported incidents. Most interviewees considered the IRP feasible, but several practical, professional and personal barriers to implementation of the IRP were identified.
Conclusion The implementation of a centre-based IRP in primary care is feasible. Reporting weeks enhance the willingness to report.
- Primary care
- Incident reporting
- Patient safety
As primary healthcare grows in both size and complexity, awareness of patient safety and monitoring of quality of care are urgently needed. The institution of large-scale out-of-hours services, additional professionals such as practice nurses and increasing part-time employment all increase the risk of system failures.
Incident reporting is an important first step in the development of a safety management system. International data indicate that incident reporting by primary care physicians is uncommon. In the UK, the National Patient Safety Agency gathers >3000 incident reports per day from healthcare providers, but only 0.4% of incidents are reported by GPs.1 Studies addressing incident reporting in general practice have typically described causes and types of error2–8 or explored the feasibility of centralised reporting procedures.2 9–15
Reporting to external databases may increase safety by aggregating data on rare incidents, disseminating relevant lessons, and revealing trends and hazards.16 In contrast, local small-scale reporting systems might allow timely responses to incidents and initiate improvements that are more tailored to the local situation. There is opportunity for detailed analysis, and ‘feedback loops’ are shorter. In hospitals, promising results with such locally implemented incident-reporting procedures (IRPs) were reported,17 but to our knowledge, no studies in primary healthcare have yet been conducted.
We explored the feasibility of a local IRP as a tool for safety management and organisational learning in five primary healthcare centres. ‘SPIEGEL’ (Dutch for ‘mirror’) is an acronym for Study on Patient Safety by Incident Evaluation in Primary Care.
The study was a prospective, observational study of a voluntary, confidential IRP, implemented in five Dutch GP healthcare centres. Three centres affiliated with the University Medical Center Utrecht were approached by the researcher, and two further practices volunteered. Qualitative and quantitative methods were used to evaluate the feasibility of the IRP.18 19
Feasibility was defined as: (1) number and nature of incidents reported, (2) actual implementation of the IRP and (3) acceptability as perceived by participants. Quantitative outcomes were number and type of incidents, root causes, type of reporter, patient harm, proportion of incidents analysed and number of implemented improvements. Qualitative outcomes were ‘acceptability’ as experienced by reporters and observed implementation of the IRP.
All care givers of the five participating centres were asked to report incidents. An incident was defined as any unintended or unexpected event which could have led or did lead to harm for one or more patients receiving care.1 In addition, three times per year participants were encouraged to report all minor incidents and near misses during dedicated ‘reporting weeks.’ Reporting weeks were spread evenly over the year, subject to pragmatic planning criteria such as avoiding overlap with holidays or other important events in the practices. In the week preceding each reporting week, the study team reminded the practices by email and provided advertising material, such as posters and fliers. During the reporting week itself, the study team took no specific action.
In every centre, a multidisciplinary IRP committee was trained to screen and analyse the incident reports. Incidents were selected for analysis by assigning a risk score (0–4), based on an estimate of potential harm and frequency of occurrence. Committees were advised to analyse incidents with a risk score of 2 or higher, based on PRISMA22 and Root Cause Analysis23 techniques. They were also responsible for developing improvement measures. Management was responsible for the actual implementation of these measures (for details on the IRP, see supplement 1).
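The screening step above amounts to a small decision rule: estimate potential harm and frequency, combine them into a 0–4 risk score, and analyse anything scoring 2 or higher. As an illustration only, the sketch below shows one possible form of such a rule; the harm and frequency levels and the combination logic are assumptions, since the actual SPIEGEL risk matrix is not specified in the text.

```python
# Hypothetical illustration of an IRP screening rule of the kind described
# above. The real SPIEGEL risk matrix is not given in the paper; the level
# names and the way harm and frequency combine here are assumptions.

HARM_LEVELS = ["none", "minor", "moderate", "severe", "catastrophic"]
FREQ_LEVELS = ["rare", "occasional", "frequent"]

def risk_score(harm: str, frequency: str) -> int:
    """Combine estimated potential harm and frequency into a 0-4 score."""
    h = HARM_LEVELS.index(harm)       # 0..4
    f = FREQ_LEVELS.index(frequency)  # 0..2
    # Assumed rule: harm dominates; a frequent incident with any harm
    # is bumped up one step, capped at 4.
    return min(4, h + (1 if f == 2 and h > 0 else 0))

def needs_analysis(harm: str, frequency: str) -> bool:
    """Committees were advised to analyse incidents scoring 2 or higher."""
    return risk_score(harm, frequency) >= 2
```

Under this sketch, a rare incident with moderate potential harm would be flagged for in-depth analysis, while a one-off minor incident would only be screened.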
Each incident was briefly described on a paper reporting form, which also contained closed questions about date, time, place, circumstances, staff and patient involved. A screening form was used by the IRP committees to decide on further steps in the reporting and learning cycle.
To explore the actual implementation of the IRP, two authors (AS, DZ) observed the five IRP committees at work and scored whether key elements of the IRP were actually carried out. Acceptability was explored in healthcare centres 1, 2 and 3, through semistructured group interviews with the three IRP committees (n=11) and semistructured interviews with 15 individual employees, purposefully sampled from different disciplines and with different attitudes towards the IRP. No interviews were conducted in healthcare centres 4 and 5, because they had only just started reporting at the time the interviews were held. The interviews covered perceived practicality, reasons to follow or not to follow the IRP, time investment, feasibility of the IRP, its influence on patient safety, and suggestions for improvement. Finally, during the ‘network meeting’ (supplement 1), all IRP committee members and managers of the participating centres commented on the data; these comments were also used to evaluate implementation and acceptability. The content of the interviews and of the network meeting was recorded in contemporaneous notes, taken by the interviewer and, during the network meeting, by assigned research team members.
The incident reporting forms were anonymised and aggregated in a database. All incidents collected were categorised by the researchers (DZ, AS), using classifications from the literature,4 15 adapted for practical use in the Dutch situation. SPSS V.15 for Windows was used for frequencies. Qualitative data were analysed by constant comparison.24 Discordant judgements were resolved by consensus discussion between two researchers (DZ, AS).
Figure 2 shows characteristics of the centres. Before the implementation of the IRP, the usual response to incidents involved action by individual physicians, mainly after severe incidents. There was no common IRP. Only the pharmacists in centre 4 had already started an internal IRP.
Number and nature of incidents reported
A total of 476 incidents were reported during the 9-month reporting period (figure 3).
Most incidents were reported during reporting weeks: 293 (62%). One hundred and sixty-two incidents (34%) were reported between reporting weeks, and no reporting date was available for 21 (4%) incidents. Relative to incidents with risk score 1 or higher, incidents with no potential harm for patients (risk score 0) were reported twice as often during reporting weeks as outside them. Two-thirds of the incidents with risk scores 3 and 4 were reported in a reporting week. On average, 4.1 incidents were reported per employee (6.7 incidents per full-time equivalent). Most reports came from medical nurses (45%) and GPs (33%).
All 476 incidents were screened by the IRP committees. Table 1 shows that 391 (82.3%) incidents were categorised as process incidents, 33 (6.9%) were technical, 31 (4.4%) were communication-related, and 19 (4%) were considered knowledge and skills incidents.
The administrative reports were mainly related to making or changing appointments with patients (74 of 127 administrative incidents). Therapeutic process incidents were mostly related to medication prescription (128 of 149). The relatively high number of medication incidents is partly a consequence of a period of focused reporting on prescription problems in one centre and of a focus on the pharmacy in another.
Harm is described in table 2. On 423 reporting forms (89%), reporters noted the harm for patients at the moment of reporting. Both catastrophic incidents were due to insufficient triage, leading to ICU admission of one patient and death of another patient. The IRP committees estimated potential harm for all incidents, by estimating what could have happened in a particular incident.
Eighty-four reports (18%) were analysed in depth by the local IRP committees. Sixty-five of the 216 incidents with risk score 2 were analysed, nine of the 13 incidents with risk score 3, and all three incidents with risk score 4. Also, seven incidents with a risk score <2 were analysed. The analysis of these 84 incidents identified 230 root causes: 34% ‘human’, 38% ‘organisational’, 14% ‘patient-related factors’ and 14% ‘technical factors.’
All centres had initiated improvement projects as a result of reported incidents. For 55 reports (12% of all reported incidents), improvements were completely implemented within the study period by three of the five centres. Examples of such measures were: improving diagnostic protocols for cystitis, redefining tasks in triage, rearranging storage of ‘look-alike’ drugs, adjusting the medication control system and adjusting appointment management to decrease errors and waiting time.
Observed implementation and acceptability
Direct observation of the IRP committee meetings showed that all centres followed the IRP at least up to the screening phase. The analysis phase, however, was often not executed according to the preagreed procedure. No centre was able to analyse all reported incidents with risk score 2 and higher. Instead, the committee often decided to analyse only incidents that appeared to cover major topics, based on frequency of occurrence or on outcome severity.
The interviews (see Supplement 2 for quotes) confirmed that incident analysis was perceived as a difficult step and therefore often not performed as proposed in the procedure. Time constraints, feelings of inexperience, questions about validity of the analysis techniques and hesitation to interview involved care givers made analysis difficult. The time to complete analysis of a single incident ranged from 1 to 8 h. The combination of both screening and analysis phases varied between 2 and 10 h after the first reporting week and between 1.5 and 5.5 h after the second and third week. During the network meeting, managers noted that incident analysis in general practice is labour-intensive, and there should be financial support to create time for analysis.
One-third of the interviewees mentioned that it was difficult to maintain awareness of incident reporting, but reporting weeks were perceived as very helpful.
About two-thirds of the interviewees mentioned that they had encountered more incidents than they had reported. Time constraints, but also feelings of guilt or shame, not feeling safe and doubts about the usefulness of reporting were mentioned as explanations for under-reporting.
Finally, 80% of respondents believed that the IRP was feasible in daily practice, while 75% thought the IRP also enhanced patient safety. Many respondents mentioned that reporting led to more patient satisfaction and better service. However, 25% of the interviewees, including one entire IRP committee, were sceptical and did not believe that IRP improved patient safety. They argued that ‘primary care already was very safe,’ that care givers mainly report minor incidents that are of little consequence to the patient, that patients themselves played an important role in safety—which could not be influenced by the IRP—and that management was unable or unwilling to realise suggested improvements.
In this study, we found that implementing a local IRP yielded numerous incident reports in primary healthcare centres, of which a substantial number resulted in actual improvements in daily practice. The IRP was acceptable for most participants, and many thought that it enhanced patient safety, suggesting that a local IRP, incorporating reporting, screening, analysis, improving and learning, is feasible. However, several barriers, not only to the reporting per se but also to the analysis in local IRP committees, were discovered.
To our knowledge, this is the first study prospectively monitoring the implementation of a local IRP in general practice, including frequency and nature of reported incidents and acceptability. We also monitored the actual implementation process and tracked local improvement measures. We anticipated that care givers would be unwilling to keep reporting frequent minor incidents. Reporting concentrated in a short period of time generates more incident reports,12 15 21 including near misses, as compared with reporting continuously for longer periods.6 10 11 25 Therefore, we asked participants to report all minor incidents during dedicated ‘reporting weeks.’
We expected that low-risk score incidents would be reported more often during reporting weeks. Conversely, we estimated that incidents with high-risk scores—which are rarer—would be reported equally often during regular practice and reporting weeks. However, ‘incident awareness’ during reporting weeks increased reporting in general, not only the reporting of minor incidents and near misses. This clearly indicates that outside the reporting weeks, under-reporting of all types of incidents is likely.
The number of incident reports in our study was higher than in other recent studies.14 26 This is possibly related to the training of potential reporters in the preimplementation phase. Another possible explanation is that participants in our study were more willing to report because their reports remained within their practice.
An important question is whether the nature of the reported incidents differs between local and national databases. In agreement with other studies,4 9 12 14 15 26 27 the majority of the reported incidents in our study were process errors (82%). However, only a formal comparison of the types of incidents reported to national databases versus incidents reported locally can reveal such differences. This was beyond the scope of the present study.
Several factors may explain why knowledge- and skills-based errors were under-reported in our study. Most participants had no prior experience with any kind of incident reporting. Moreover, reporting a knowledge- or skills-based incident can be very sensitive for the professional involved. Anonymous reporting might have partly overcome this under-reporting, but we chose confidential rather than anonymous reporting, because analysing incident reports together with the professional involved is more likely to produce valuable information for improvement measures.6 In addition, our qualitative data indicated another explanation for the lack of reported incidents in the ‘knowledge and skills’ category.2 Most GPs consider mistakes in the clinical process an integral part of a professional life in which they have to deal with diagnostic uncertainty. They often consider knowledge- and skills-related incidents personal mistakes in which no system causes can be identified; sometimes they do not consider the error an incident at all. Therefore, some argue, there is no point in reporting and analysing such incidents. This ambiguity28 concerning risks in general practice, in which there is no consensus about the risk itself, makes reporting of incidents related to lack of knowledge or skills difficult.
Our study had some limitations. The number of participating practices was relatively small, and each was willing to participate in the project. This may have resulted in overestimation of the feasibility of a local IRP. Although the practices might not represent the ‘average GP practice,’ reasons for participation differed clearly between centres. This allowed us to uncover several important feasibility issues. Furthermore, preimplementation data on these centres did not show any difference in the way incidents were handled before our study. Finally, the time frame of this study was only 1 year. It is possible that the effects of implementation of the IRP and some of the problems and barriers observed in this study are due to this relatively short time frame. However, as we chose to implement the IRP from a participative approach,19 it is highly relevant to document these early problems, because they will have a large impact on the feasibility of such a new procedure.
The centres were unable to analyse all incident reports according to the preagreed IRP, often because participants found it too time-consuming or complicated. A centralised IRP with a national database and professional analysts might solve this problem, but as discussed earlier, such an external IRP yields considerably fewer incident reports and could be less effective in generating local improvement efforts. In addition, Iedema et al29 suggested that it is not the formal outcome of the incident analysis but the process of analysis itself that leads to improvements in the working environment. This view questions the utility of focusing efforts on building national databases containing all incidents and root causes, and supports the concept of reporting and analysis in the working environment of the care givers. However, certain centralised activities, such as support for local incident analyses by experienced professionals, might enhance the quality of the analysis and reduce the workload of local healthcare professionals.
In conclusion, our study indicates that a locally implemented IRP is feasible as a tool for managing patient safety in general practice. Dedicated ‘reporting weeks’ enhance the readiness to report. However, the incident reports do not cover the entire spectrum of possible incident types, and the most vulnerable aspect of a local IRP is the analysis phase. Both professional support for the analysis and a more selective approach (‘less is more’) could help in developing a balanced IRP for general practice. Rather than collecting every incident that occurs in GP practice, a more efficient road to safety management in primary care could combine local reporting of incidents and near misses in a few dedicated reporting weeks per year, if desired focused on known risky processes, with additional tools to uncover and discuss knowledge- and skills-related incidents.
Funding SBOH, Dutch financer for GP vocational training institutes and GP trainees.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.