We read with great interest the article by Flott et al (1) describing the challenges of using patient-reported feedback. We recognize the challenges described and performed a bachelor's project in the intensive care unit (ICU) of the University Medical Center Groningen (UMCG). We think the results of our project provide a potentially promising, practical solution for making feedback more useful.
In 2013 the UMCG participated in an independent multi-center study conducted among relatives of ICU patients (2). The open questions of the questionnaire revealed more dissatisfaction than expected, which fueled the quest for an alternative, simple and continuous feedback system. In this study we compared the quality and amount of feedback gathered by an oral survey during the first two weeks with that gathered by an app during the subsequent two weeks.
Between February 20th and March 18th 2017, patients older than sixteen years who were listed for discharge from the ICU that day, and their relatives, were approached to participate in this study. The oral survey consisted of two simple questions: “How satisfied are you with your stay in the ICU? (grade 1-10)” and “Do you have specific suggestions of improvement for the ICU?”. The RateIt app (Rate It Limited®, Hong Kong) was used, presenting the same two questions as the oral survey.
A total of 208 responses (133 patients and 75 relatives) were included. The median satisfaction score was 8. Despite this high score, many suggestions for improvement were given (n=95 suggestions from 68 respondents). The oral survey yielded suggestions for improvement more often than the app (50 vs. 18 respondents). Suggestions for improvement were made more frequently by relatives than by patients (57 suggestions from 37 relatives vs. 38 suggestions from 31 patients). All improvement suggestions were classified into one of six categories: ‘Surroundings’ 48/95 (51%), ‘Information, communication and education’ 23/95 (24%), ‘Patient care’ 15/95 (16%), ‘Attitude, handling and relation of caregiver with patient/relatives’ 7/95 (7%), ‘Emotional support’ 1/95 (1%) and ‘Care for relatives’ 1/95 (1%).
This simple study showed that an oral survey results in more suggestions for improvement than an app. The lack of complexity of the survey resulted in very specific, useful and practical suggestions for improvement, which were easily transformed into clear recommendations, such as: “respect sufficient rest of our patients” or “don’t forget to provide food to the patients who are able to eat”. The survey can easily be repeated over time. These results may give a new perspective on how to conduct feedback studies.
The key suggestions for improvement found in this study were presented to the department in the form of a coat rack, an improvement option frequently mentioned by relatives (a coat rack was missing in one of our family rooms). Coat racks will be hung in central places in our department, and on them recommendations based on the most important improvement suggestions will be hung. We think this is one example of a simple but practical solution to make feedback more useful: every month the recommendations will be replaced by new ones, reminding all caregivers in our department of the feedback given by our patients and their relatives and thereby striving to improve our care.
We are well aware that the surveys used in the studies described in the article by Flott et al (1) are much larger and more complex than the one we used in our study. We simply wanted to show that one learning point could be: don't overcomplicate.
References
1. Flott KM, Graham C, Darzi A, Mayer E. Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed. BMJ Qual Saf 2017;26:502-507.
2. Jensen HI, Gerritsen RT, Koopmans M, Zijlstra JG, Randall Curtis J, Ording H. Families’ experiences of intensive care unit quality of care: Development and validation of a European questionnaire (euroQ2). Journal of Critical Care 2015;30(5):884-890.
This study uses rigorous analysis to obtain important insights about the real-time information that our patients are handed at discharge. It is puzzling that the EMRs used were not named. One can infer from a look through the MSU website that they have both Cerner and Epic, but why is that necessary? The heart of quality/safety work is transparency balanced by humility, i.e. we shouldn't expect our IT systems to be any more perfect than we are, but they won't improve if we don't have more openness. The lack of scientific foundations and published post-marketing surveillance for our EHRs, especially the ascendant ones, was initially surprising. However, as they achieve complete market dominance, with less overt scientific review and public guidance and commentary, the silence is deafening. Is the BMJQS's failure to simply identify the names (or maybe I missed the citations) an oversight, or part of nondisclosure agreements with the vendors at the MSU institutions or at BMJQS?
As you point out, Root Cause Analysis will often fail with hospital adverse event (AE) data because it was not designed to deal with data arising in a complex system (1). The same can be said for Pareto analysis. Statistical process control (SPC) methods are often used to summarise AE data, particularly hospital infection data such as surgical site infections (SSIs) and bacteraemias (2). Standard SPC also frequently fails to summarise these complex data correctly.
With binary SSI data an approximate expected rate is frequently available, so cumulative observed-minus-expected and CUSUM analyses are appropriate (2). However, the changing observed rate is not seen unless the number of procedures is large enough for them to be grouped by months or quarters, which is often not the case. Even when such aggregation is possible, difficulties arise because the number of procedures in each month may differ markedly. This problem can be dealt with, at least approximately, by applying a generalised additive model (GAM) to the binary data that predicts the observed AE rate at various points in the time series.
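The cumulative observed-minus-expected idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the expected rate `p0` and the outcome data are invented assumptions.

```python
# Minimal sketch of a cumulative observed-minus-expected (O-E) path for
# binary surgical-site-infection (SSI) data. The expected rate p0 is an
# assumed benchmark (e.g. from national surveillance), not a fitted value.

def cumulative_o_minus_e(outcomes, p0):
    """outcomes: sequence of 0/1 SSI indicators, one per procedure in time order.
    Returns the running sum of (observed - expected); a sustained upward
    drift suggests the true rate exceeds p0."""
    total, path = 0.0, []
    for y in outcomes:
        total += y - p0        # each procedure contributes y - p0
        path.append(total)
    return path

# Hypothetical example: 10 procedures, assumed expected SSI rate of 10%
path = cumulative_o_minus_e([0, 0, 1, 0, 0, 0, 1, 1, 0, 0], 0.1)
print(round(path[-1], 1))  # 3 observed - 1.0 expected = 2.0
```

A CUSUM adds a decision interval and resets to this same running-sum idea; the O-E path alone already shows drift without requiring monthly aggregation.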
Count and rate data, such as bacteraemias or new isolates of an antibiotic-resistant organism, will usually not have an expected rate available. These data are often grouped by months and a Shewhart chart used for their display. This chart requires a stable centre-line about which reliable control limits can be drawn. Often the mean value is used as the expected rate even though it may be representative of few or none of the monthly data values, which makes the control limits meaningless. A possible way round this is to employ confidence limits for the monthly rates. Viewed as a likelihood-supported range, this enables the extent of each of the monthly counts or rates to be assessed. If a GAM analysis is added, the predicted rate and its confidence limits can also be obtained throughout the time series (2).
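As an illustration of per-month confidence limits, a Wilson score interval for a monthly proportion can be computed with no modelling library (the GAM step would need one, e.g. in R). The choice of the Wilson interval and the monthly figures are assumptions for illustration only.

```python
import math

# Wilson 95% confidence interval for a monthly binomial proportion.
# (Count data such as bacteraemias per 1000 bed-days would instead use a
# Poisson interval; a proportion is shown here for simplicity.)
# Plotting each month's estimate with its own interval avoids hanging
# Shewhart control limits on an unrepresentative overall mean.

def wilson_interval(events, n, z=1.96):
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(4, 120)   # hypothetical: 4 events in 120 procedures
print(f"{lo:.3f}-{hi:.3f}")
```

The Wilson interval behaves better than the naive normal approximation when monthly counts are small, which is exactly the situation the letter describes.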
This approach is more in keeping with the complexity of the processes responsible for the AEs than is standard SPC, which was not designed to deal with complex systems.
As an aside, it is worth noting that some swamps may be valuable ecosystems. This popular analogy is thus a poor one. Like root-cause analysis it belongs to the area of simple/complicated systems, not complex ones.
1. Morton A, Whitby M, Tierney N, Sibanda N, Mengersen K. Statistical Methods for Hospital Monitoring. Wiley StatsRef: Statistics Reference Online 2016:1-8.
2. Morton A, Mengersen K, Whitby M, Playford G. Statistical Methods for Hospital Monitoring with R. Chichester: John Wiley and Sons; 2013.
Vindrola-Padros and colleagues provide a helpful examination of co-production of quality improvement knowledge by university-based researchers in cooperation with members of service organizations. Another important type of embedded researcher consists of “fully embedded” researchers, who are academically trained but employed by large care delivery systems. These individuals typically work in research units in the delivery systems. Their work is funded both by the systems themselves and by external private and public organizations, such as the Agency for Healthcare Research and Quality (AHRQ). These fully embedded researchers contribute actively to national professional forums and journals and sometimes collaborate with embedded researchers in other systems.
AHRQ leverages relationships with fully embedded researchers because of their deep and nuanced knowledge of internal system data and operations. Health systems-based researchers’ ready access to care sites within which to test new approaches, and to data sources that permit rapid analysis of the results of those tests, is of great value to AHRQ as we seek solutions to real-world problems in areas of national importance. AHRQ-supported work of this kind demonstrates the value of health delivery organizations becoming “learning health systems” (1) – using their own internal data and resources to drive quality improvement and sharing their findings with other organizations.
AHRQ’s collaboration with researchers in the Palo Alto Medical Foundation (PAMF) Research Institute provides a powerful example of how partnership between fully embedded researchers and external funding agencies contributes to health system learning. AHRQ partnered with Kaiser Permanente and PAMF researchers to study implementation of a Lean-based redesign to improve care delivery efficiency in PAMF’s primary care clinics (2). Applying Lean analysis techniques, PAMF discovered inefficiencies in a pilot primary care clinic and redesigned work roles and workflow to enhance coordination among physicians and to better support them. Key changes included:
• New roles for medical assistants as “flow managers”, facilitating physicians’ work and performing administrative tasks, such as handling email, that previously burdened physicians
• New workflows, including daily huddles for scheduling and agenda setting during patient visits
• Co-location of physician-medical assistant teams in a shared workspace.
PAMF then tested these new roles and processes in three additional clinics, assessed the improvements’ effects, and rolled the changes out to 13 additional clinics.
PAMF researchers interviewed staff to uncover factors influencing successful implementation of these changes and system requirements for successful redesign of care (3,4). To assess changes in efficiency, they analyzed rich and timely internal data sources such as:
• Physician efficiency metrics derived from PAMF’s time-stamped EHR data and other operational sources
• PAMF’s routine patient and personnel surveys
• Standardized quality metrics that PAMF reports.
Their research showed that PAMF’s primary care redesigns boosted efficiency without sacrificing quality and satisfaction (5). AHRQ and PAMF disseminated these valuable findings widely through practice-oriented briefs, conference presentations, and webinars, as well as in peer-reviewed papers.
PAMF’s fully embedded researchers promoted internal learning by tracking progress and outcomes of the Lean improvement efforts and providing feedback to their system’s leaders and staff. AHRQ and the PAMF researchers promoted system-wide learning about Lean-based primary care redesign by broadly disseminating the study’s findings and implementation lessons.
3. Hung D, Gray C, Martinez M, Schmittdiel J, Harrison MI. Acceptance of Lean redesigns in primary care: a contextual analysis. Health Care Manage Rev 2017;42:203-212.
4. Gray C, Harrison MI, Hung D. Medical assistants as flow managers in primary care: challenges and recommendations. J Healthc Manag 2016;61:181-191.
5. Hung D, Harrison MI, Martinez M, Luft H. Scaling Lean in primary care: impacts on system performance. Am J Manag Care 2017;23(3):161-168.
I read with interest the article by Peerally et al (1) on 'The problem with root cause analysis'. I reflected on the recent cases that happened at Royal North Shore Hospital and Sydney Hospital (2,3,4), which led me to consider which investigative tool is best applied to different incidents and identified risks.
The use of appropriate tools and the involvement of key stakeholders are crucial elements of a successful investigative process and its outcomes; however, we cannot ignore the reality of process cost versus event severity and risk.
Use of tools by subject matter experts
Root cause analysis (RCA) is a tool used in many incident investigations (5,6). Recommendations are often made as a result, yet similar errors still happen. As correctly mentioned by Peerally et al, most incident investigations are done by the local team using RCA tools, but without the involvement of expert accident investigators to ensure regular feedback loops and ongoing corrective actions.
I do agree that hospitals should move toward proactively preventing adverse incidents for high-probability, high-severity risks. Preventing adverse incidents can eliminate harm to patients, reduce liability for organisations and reduce both operating costs and the need for resources. A proactive approach often uses Failure Mode Effect Analysis (FMEA) tools. FMEA often requires a higher level of investigative expertise and as such often costs more, so it may be best to assess risks on a probability-severity matrix to identify which tool is most appropriate.
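The triage logic behind a probability-severity matrix can be sketched as follows. The category labels, score thresholds and tool mapping here are illustrative assumptions, not a published standard.

```python
# Hypothetical probability-severity matrix for choosing an investigation
# tool. Thresholds and tool assignments are assumptions for illustration;
# a real organisation would calibrate these against its own risk appetite.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "major": 3}

def choose_tool(probability, severity):
    score = PROBABILITY[probability] * SEVERITY[severity]
    if score >= 6:         # high risk: proactive, expert-led analysis
        return "FMEA"
    if score >= 3:         # moderate risk: structured retrospective review
        return "RCA"
    return "local review"  # low risk: lightweight local follow-up

print(choose_tool("likely", "major"))  # -> FMEA
```

The point of the sketch is the economics the letter argues for: the expensive tool (FMEA) is reserved for the small high-score corner of the matrix, while most incidents get cheaper review.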
The proposal of engaging an independent professional body, while preferable, can be time-consuming and expensive. I propose that for most cases (with the exception of cases with significant legal liability) this level of expertise and independence could be developed within the organisation. The body, i.e. a quality or risk management department, should comprise people with skills in systems thinking, sound interviewing techniques, staff engagement, human factors analysis, current clinical practice and health management, together with the ability to analyse data (7). This department could then act as a quasi-independent body to avoid situational bias and provide a platform for disseminating results within the hospital, between hospitals and to governmental bodies as shared learning, to help prevent occurrence or recurrence. As a largely independent department within the organisation, it can in most cases facilitate the investigative process objectively, thus reducing the tendency to blame (8,9,10).
Key stakeholders' involvement
The involvement of key stakeholders is crucial in any investigative process: leaders, managers and clinicians.
The leaders provide governance, leadership and support to the managers. They are involved in the investigative process to gain their input and consensus and to commit resources for any recommendations that might be made. It is critical that leaders set departmental performance indicators with due acknowledgment of the resources needed to achieve them, as too often the burden of performance and blame is levied on departments, middle management and individuals where identified risk avoidance is under-resourced.
The managers (department managers, quality and risk managers) are required to provide a safe environment for practice. They are to ensure that protocols and standards of care are adhered to and that patients are managed in a consistent manner. The role of the manager also includes identifying risks and establishing processes, with the support of the leader, to prevent those risks from reaching the patient.
The clinicians are required to conduct procedures and practices in compliance with their scope of practice and the requirements of their organisation and regulatory boards.
Conclusion
The use of an appropriate tool by a qualified person with the right expertise makes a difference. It would be economically unrealistic to apply full FMEA processes to every incident or identified risk profile, so an organisational risk severity/probability matrix needs to be developed so that the most appropriate tool is used.
The involvement of key people ensures that a holistic approach is applied and that the outcomes of investigations are implemented with feedback checks and balances, and shared across departments, between hospitals and at a national level (11).
References:
1) Peerally MF, Carr S, Waring J, Dixon-Woods M. The problem with root cause analysis. BMJ Qual Saf. 2016 Aug;1:1-6
2) Bodies swapped: Dead baby mistakenly cremated and daughter finds mother's body mislabelled at Royal North Shore Hospital. Sydney: The Sydney Morning Herald; 2016 Aug 31. Available from: www.smh.com.au/nsw/daughter-finds-mothers-body-mislabelled-in-morgue-mixup-at-royal-north-shore-hospital-20160830-gr4g3n.html
3) Joseph AP, Hunyor SN. The Royal North Shore Hospital inquiry: an analysis of the recommendations and the implications for quality and safety in Australian public hospitals. Med J Aust. 2008 Apr;188(8):469-72
4) Family want justice for fatal gas mix up [television broadcast]. Sydney: Sky News; 2016 Jul 26. Available from: http://www.skynews.com.au/news/top-stories/2016/07/26/incorrect-gas-fitting-behind-nsw-baby-death.html
5) Clifford SP, Mick PB, Derhake BM. A Case of Transfusion Error in a Trauma Patient with Subsequent Root Cause Analysis Leading to Institutional Change. J Investig Med High Impact Case Rep. 2016 May;4(2):1-4
6) Van-Galen LS, Struik PW, Driesen BEJM, Merten H, Ludikhuize J, Van der Spoel JI, Kramer MHH, Nanayakkara PWB. Delayed Recognition of Deterioration of Patients in General Wards Is Mostly Caused by Human Related Monitoring Failures: A Root Cause Analysis of Unplanned ICU Admissions. 2016 Aug;11(8):1-14
7) Ibrahim JE. What is the quality of our quality managers? Is it time for quality managers in Australia to be certified? J Qual Clin Pract. 2000;20(1):32
8) Smetzer JL, Cohen MR. Lessons from the Denver medication error/criminal negligence case: look beyond blaming individuals. Hosp Pharm. 1998;33:640-57
9) Leape L. Error in medicine. JAMA. 1994;272:1851-7
10) Runciman W, Merry A, Smith AM. Improving patients' safety by gathering information. Anonymous reporting has an important role. BMJ. 2001;323:7308
11) Leape LL. Why should we report adverse incidents? J Eval Clin Pract. 1999;5:1-4
Editor - Professor Knight (1) highlights a serious problem with systems of organisational learning in maternity care that is endemic across a variety of acute care settings in the NHS. I write to share my experience with a trainee-based structured case note review method so that other organisations and patients may benefit from what I refer to as a black box medicine (BBM) approach to major maternal morbidity. Trainee-based mixed explicit and structured implicit (MESI) retrospective case record review (RCRR) methodology attempts to combine the rigour of external review with the resource effectiveness of local review.
From personal experience, methodological, logistical and economic barriers often resulted in superficial, subjective and quite unstructured RCRR. Black box medicine evolved from the realisation that organisations needed to improve the RCRR process, as learning opportunities were frequently missed. Furthermore, the selection of cases for review often focuses on the tip-of-the-iceberg phenomenon, as resources for a more inclusive review strategy are not available. Consequently, patients would continue to be exposed to the same latent suboptimal care. Analysis of care with adverse clinical outcomes in other settings reveals that the final common pathway to suboptimal care is failure to recognise and/or rescue deteriorating patients. Recommendations for the use of modified obstetric early warning scores (2) reinforce the premise that opportunities to prevent major maternal morbidity lie in the analysis of this final common pathway, and that a BBM approach could enhance organisational learning.
Development of MESI RCRR is not a new concept. Recent work has been published on similar RCRR methods (3,4), and the Royal College of Physicians is developing a national RCRR programme to review adult acute care deaths in England and Scotland. However, the logistical burden and cost are still significant, while a focus on general adult mortality is probably not applicable to major obstetric morbidity. Developing a MESI RCRR that utilises junior members of a multidisciplinary team (MDT) to abstract and analyse most of the clinical information would reduce the cost per case but could threaten the validity of the process. Repeated cycles of case note review at various organisations allowed this current method of trainee-based RCRR to develop iteratively. Identification of new methodological issues during each cycle allowed refinement based on the principles discussed below.
Acute care is essentially a series of clinical encounters that can be broadly classified as an assessment, intervention or monitoring event. Every care event has commission or omission characteristics that can be judged as part of a quality assessment process. Commission characteristics include timeliness, appropriateness, sufficiency and absence of adverse event. An event is considered as an omission if it did not occur but was indicated in the clinical context. Good maternal care in any clinical context can therefore be universally defined as an episode consisting of care encounters or events that are timely, appropriate, and adequate without adverse event or omission. Judging care events positively or negatively requires further explicit and structured implicit guidance. Explicit guidance allows decisions on events based on basic physiologic rationale and evidence based standards of care. Structured implicit guidance allows the reviewer to consider medico-legal vulnerability of documentation and the quality of clinical encounters by assessing documented content, evidence of cognitive bias or error, detail of contingency plans and documented communication.
By abstracting a predetermined time frame of care and transcribing it into a simple database, it is possible to generate a timeline of events, with those that contribute to suboptimal care highlighted for discussion at a designated MDT meeting. With minimal training, senior medical trainees and midwives or clinical coders can abstract the notes, allowing most of the labour-intensive work to be done with minimal resource before an MDT meeting. With the evolution of electronic health care records (EHCR) it is foreseeable that the burden of data abstraction will reduce considerably; however, many organisations are a long way off implementing EHCR to this level. Participation in the RCRR process also generates valuable learning for reviewers. Drawing conclusions on the overall quality of care or the avoidability of an outcome is an additional step that requires more implicit reasoning and group consensus. This step can be taken during the MDT meeting if needed, but should not distract from reflecting more broadly on lessons amenable to recommendations on ways to optimise care.
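The kind of simple record structure the letter describes can be sketched as follows: care events classified by type, judged on commission/omission characteristics, and sorted into a timeline with suboptimal events flagged for the MDT meeting. All field names and examples are assumptions for illustration, not the author's actual schema.

```python
# Illustrative sketch of a care-event record for MESI RCRR-style review.
# Field names and example data are hypothetical, not the author's schema.

from dataclasses import dataclass

@dataclass
class CareEvent:
    time: str                 # e.g. "08:30" (a real system would use datetimes)
    kind: str                 # "assessment" | "intervention" | "monitoring"
    description: str
    timely: bool = True       # commission characteristics judged by reviewer
    appropriate: bool = True
    adequate: bool = True
    omission: bool = False    # indicated in the clinical context but did not occur

    def suboptimal(self):
        return self.omission or not (self.timely and self.appropriate and self.adequate)

def timeline(events):
    """Return events in time order, each paired with a suboptimal flag for MDT review."""
    return [(e.time, e.description, e.suboptimal()) for e in sorted(events, key=lambda e: e.time)]

events = [
    CareEvent("09:40", "intervention", "antihypertensive given"),
    CareEvent("09:00", "assessment", "BP recorded, not escalated", timely=False),
]
for t, desc, flag in timeline(events):
    print(t, desc, "REVIEW" if flag else "ok")
```

Even this minimal structure supports the workflow described: abstractors populate the records, and the MDT meeting only discusses the flagged rows.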
Hopefully consideration of the MESI RCRR principles outlined above will enable or stimulate obstetric units to undertake more inclusive, frequent and detailed review of major obstetric morbidity. Better organisational learning will most likely be achieved if discussion of RCRR findings has a more reflective focus on ways that care could have been optimised in contrast to debates about avoidable outcomes.
References
1. Shah A, et al. Towards optimising local reviews of severe incidents in maternity care: messages from a comparison of local and external reviews. BMJ Qual Saf 2016;0:1-8.
2. Carle C, Alexander P, Columb M, Johal J. Design and internal validation of an obstetric early warning score: secondary analysis of the Intensive Care National Audit and Research Centre Case Mix Programme database. Anaesthesia 2013;68(4):354-67.
3. Hutchinson A, et al. A structured judgement method to enhance mortality case note review: development and evaluation. BMJ Qual Saf 2013;22(12):1032-1040.
4. Hogan H, et al. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012;21:737-745.
It appears that these authors believe that variability in disciplinary rates between states indicates a lack of quality and/or a lack of uniformity of safety measures.
Nothing could be further from the truth.
There are many more reasons affecting a state's disciplinary rates than those controlled for in the study. For just one glaringly obvious example, in certain states and in DC many licensees never set foot in the state, touch patients or do anything that could possibly endanger patient safety.
A state medical licensing board (MLB) is doing its job ONLY when it carefully considers each potential disciplinary case on its own merits and the decisions reached are totally separate and apart from all other cases and decisions in other states. MLB members should never even be exposed to statistics from other states, lest they fall prey to the perennial "Public Citizen" ploy of daring them to play "let's not be last in the disciplinary contest".
To imply that a narrower spread of disciplinary actions across all states would reflect enhanced patient safety is ludicrous. What that would in FACT suggest is that all state MLBs are "grading on a curve" without regard to actual merit, not bothering to take either their jobs OR their state patients' safety seriously, but simply attending to their statistics and averages. A cynic would say "Oh, it's the 15th of the month, better throw a few more docs under the bus!"
It would also appear that the authors are not aware of the Federation of State Medical Boards, which does everything in its power to promulgate standards and policies for disciplinary activities through its conferences, webinars and publications.
Reynolds et al (1) reported the impact of providing prescriber feedback on reducing prescribing errors. The authors concluded that reducing prescribing errors needs a multifaceted approach and that feedback alone is not sufficient. Medication errors are often preventable, and inappropriate prescribing is identified as an important contributing factor in medication errors (2). It is interesting to note that despite regular feedback, prescribing errors did not improve. This suggests a failure to improve the underlying prescribing culture.
We need a shift in how we consider safety issues in an organisation,
and it is important to assess the underlying safety climate. Medication
safety issues should be discussed as part of interdisciplinary rounding or
daily safety huddles. If medication errors were raised as a safety concern
during huddles or interdisciplinary rounding, prescribers could see how
errors affect individual patients, and this may result in practice
changes. Interventions that focus on improving safety culture have proven
effective in reducing adverse events in hospitals, such as
catheter-associated bloodstream infections.3
It is also well recognised that committed leadership and a supportive
organisational culture are important in bringing about practice change.4
Senior doctors play an important role in developing the prescribing
culture of junior doctors, who have reported that early in their careers
their prescribing practices are primarily influenced by senior
colleagues.5 While the authors' decision to target junior doctors'
prescribing is based on valid reasons, the lack of involvement of senior
doctors in the feedback process may have limited its effectiveness in
bringing about practice change. It may have been worthwhile to consider
the prescribing practices of the whole team, including the consultants,
and not just those of junior doctors.
In conclusion, it is evident that feedback alone does not change
prescribing practices; we need a shift in our approach to safety and must
build an organisational culture that treats safety as a key priority.
References:
1. Reynolds M, Jheeta S, Benn J, et al. Improving feedback on junior
doctors' prescribing errors: mixed-methods evaluation of a quality
improvement project. BMJ Qual Saf 2016:bmjqs-2015-004717.
2. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer
health system. Washington D.C: National Academies Press, 2000.
3. Pronovost PJ, Berenholtz SM, Goeschel CA, et al. Creating high
reliability in health care organizations. Health Serv Res
2006;41(4p2):1599-617.
4. Kaplan GS, Patterson SH, Ching JM, et al. Why Lean doesn't work for
everyone. BMJ Qual Saf 2014:bmjqs-2014-003248.
5. De Souza V, MacFarlane A, Murphy AW, et al. A qualitative study of
factors influencing antimicrobial prescribing by non-consultant hospital
doctors. J Antimicrob Chemother 2006;58(4):840-43.
In this article, the authors propose that little evidence exists in
healthcare to show that application of high reliability organization (HRO)
principles has resulted in significant or sustained improvement in
performance. Further, they attribute the problem partially to
under-recognition of the role of habit in the process. While we fully
agree that forming habitual behavior is essential to creating an HRO, we
are far less pessimistic about the state of affairs in the healthcare
industry, particularly in pediatrics.
Vogus and Hilligoss primarily drew their many references to suboptimal
implementation of HRO principles from the adult patient care arena. In
pediatrics, multiple hospital systems have used HRO principles to achieve
significant and sustained improvement in safety performance on specific
metrics such as reducing adverse drug events (1), improving hand hygiene
compliance (2), and reducing serious safety events (3), as well as
reducing all forms of preventable harm across entire systems (4). We are
limited in the number of references we are allowed to cite; however, we
could list at least another dozen papers showing high reliability
principles, effectively implemented, linked to sustained and robust
improvement in clinical outcomes. Further, beginning over a decade ago,
pediatric hospitals joined together to form collaboratives (currently over
100 participants), agreeing not to compete on quality but to share data
and discover best practices. The collaboratives implement high reliability
principles and measure compliance with clinical practice expectations in
order to improve outcomes on multiple patient safety-related measures.
Compliance with various care practice bundles by participating hospitals
is at the core of these efforts, and the results are significant (5).
We applaud the Virginia Mason Production System (VMPS) and the
Comprehensive Unit-based Safety Program (CUSP) and the results they have
achieved. The mechanisms described for cultivating mindfulness and shared
habits are all variations of tactics and strategies we have been using in
pediatrics. However, we do not agree with the statement that these
programs "operate independently of any specific leader". While specific
leaders can and do change, we believe that vigorous and steady leadership
from the top is necessary to maintain the gains. Therefore, we agree the
process can be fragile in the face of leadership change.
The authors did not discuss an element that we believe is essential
to achieving optimal quality and safety outcomes: transparency. Early in
the Ohio Children's Hospitals' collaborative, hospital CEOs declared that
they would put inter-hospital rivalries and competition aside for the sake
of safety. Their stated goal was to develop the kind of trust that allows
open sharing of data and best practices, so that a culture of rapid
information sharing (all teach, all learn) could develop. Real and
sustained progress requires internal and external transparency, and we
suggest the Ohio Solutions for Patient Safety collaborative results
support this belief (5).
To be clear, in pediatrics we are not yet where we want to be
regarding quality and safety. Most systems have declared the elimination
of preventable harm as the ultimate goal, recognizing that, in many ways,
it is an unending journey, but the only truly acceptable aspirational
goal. Yet pediatrics is making measurable and significant progress using
HRO principles, and we fully embrace the development of mindful habits as
an important part of that process.
References
1. McClead RE, Catt C, Davis JT, Morvay S, Merandi J, Lewe D, Stewart B,
Brilli RJ. An Internal Quality Improvement Collaborative Significantly
Reduces Hospital-wide Medication Error Related Adverse Drug Events. J
Pediatr 2014;165(6):1222-9.
2. Toltzis P, O'Riordan M, Cunningham DJ, Ryckman FC, Bracke TM, Olivea
J, Lyren A. A Statewide Collaborative to Reduce Pediatric Surgical Site
Infections. Pediatrics 2014;134(4):e1174-e1180.
3. Muething SE, Goudie A, Schoettker PJ, Donnelly LF, Goodfriend MA,
Bracke TM, Brady PW, Wheeler DS, Anderson JM, Kotagal UR. Quality
Improvement Initiative to Reduce Serious Safety Events and Improve Patient
Safety Culture. Pediatrics 2012;130:e423-e431.
4. Brilli RJ, McClead RE Jr, Crandall WV, Stoverock L, Berry JC, Wheeler
TA, Davis JT. A comprehensive patient safety program can significantly
reduce preventable harm, associated costs, and hospital mortality. J
Pediatr. 2013 Dec;163(6):1638-45. Epub 2013 Jul 30.
5. Lyren A, Brilli R, Bird M, Lashutka N, Muething S. Ohio Children's
Hospitals' Solutions for Patient Safety: A Framework for Pediatric Patient
Safety Improvement. J Healthc Qual 2015;38(4):213-222.
Dhaliwal's comment [1] on Zwaan et al [2] nicely refutes what has been called "the hypothesis of special cause" [3] - the notion that when things turn out wrong, the cognitive processes leading to that outcome must have been fundamentally different (ie, error-prone) from when they turn out right. Dhaliwal's argument recapitulates thinking that is over 100 years old; one of the early contributors to psychology, Ernst Mach, wrote (in 1905): "Knowledge and error flow from the same mental source; only success can tell one from the other" [4].
What is interesting here is not that the hypothesis of special cause is wrong, but rather the question of why it has been so popular and persistent. What is it about the notion of humans as fundamentally irrational, poor decision-makers that gives this idea such wide appeal? After all, broad acceptance of this sort is not the norm for most psychological or medical research; controversy, argument, or outright disbelief are much more common [5]. Christensen-Szalanski and Beach surveyed decision-making studies in psychology and reported that, although the studies' conclusions were roughly evenly divided between finding good or poor decision-making performance (56% vs 44%), studies reporting human performance as flawed were cited almost 6 times more frequently than those reporting it good. Citations outside of psychology journals were overwhelmingly used to advance the claim that people are poor decision-makers [5].
One reason for this strange popularity is that the people-are-irrational claim provides benefits for those who have rationality to sell: guideline authors, health care managers, and other proponents of scientific-bureaucratic medicine [6,7]. Another is that it paradoxically provides individual benefits: once we understand the clever puzzles of heuristics and biases problems, even in retrospect, we tend to feel that we must be pretty clever also. And a final, and likely strongest influence, is that it protects organizations and elites: attributing adverse events to flawed mental processes at the front lines serves as a kind of lightning rod, conducting the harmful consequences of bad outcomes down an organizationally safe pathway [8].
Unfortunately, the history of patient safety to date does not suggest that cautions such as Dhaliwal's will have much effect; such cautions have been raised and ignored before [9-12]. Patient safety's fixation on 'medical error' as the fundament of medical harm serves many (perhaps extraneous) purposes, but is based on an ontological will-o'-the-wisp [3,13,14]. Given general agreement on the meagre progress of the patient safety movement to date [15-18], a fundamental re-thinking of our basic premises and hidden assumptions is desperately needed if we are to move forward. And as with many fixations, a sea change of this sort is not likely to come from within the present patient safety movement, but must come from the outside [19,20]. We can only hope 'these barbarians' challenge us sooner rather than later [21].
References
1. Dhaliwal G. Premature closure? Not so fast. BMJ Quality & Safety 2016 bmjqs-2016-005267:online ahead of print.
2. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Quality & Safety 2016.
3. Hollnagel E. Safety-I and Safety-II: The Past and Future of Safety Management. Farnham, UK: Ashgate; 2014, 187 pages.
4. Mach E. Knowledge and Error. Translated by Foulkes P, McCormack TJ. Dordrecht, Netherlands: Reidel Publishing Co; 1905 (English translation 1976), 393 pages.
5. Lopes LL. The Rhetoric of Irrationality. Theory & Psychology 1991;1(1):65-82.
6. Harrison S, Moran M, Wood B. Policy emergence and policy convergence: the case of 'scientific-bureaucratic medicine' in the United States and United Kingdom. The British Journal of Politics & International Relations 2002;4(1):1-24.
7. Wears RL, Hunte GS. Seeing patient safety 'Like a State'. Safety Science 2014;67:50-57.
8. Cook RI, Nemeth C. "Those found responsible have been sacked": some observations on the usefulness of error. Cogn Technol Work 2010;12(1):87-93.
9. Henriksen K, Kaplan H. Hindsight bias, outcome knowledge and adaptive learning. Qual Saf Health Care 2003;12(Suppl 2):ii46-ii50.
10. Dekker SWA. Patient Safety: A Human Factors Approach. Boca Raton, FL: CRC Press; 2011, 250 pages.
11. Hollnagel E. Does human error exist? In: Senders JW, Moray NP, eds. Human Error: Cause, Prediction, and Reduction. Hillsdale, NJ: Lawrence Erlbaum Associates; 1991: pp 153.
12. Wears RL. The error of chasing 'error'. Northeast Florida Medicine 2007;58(3):30-31.
13. Dekker SWA. Is it 1947 yet? http://www.safetydifferently.com/is-it-1947-yet/, accessed 19 May 2015.
15. National Patient Safety Foundation. Free From Harm: Accelerating Patient Safety Improvement Fifteen Years after To Err Is Human. Cambridge, MA: National Patient Safety Foundation; 2015, http://www.npsf.org/custom_form.asp?id=03806127-74DF-40FB-A5F2-238D8BE6C24C, accessed 8 December 2015, 59 pages.
16. Pronovost PJ, Ravitz AD, Stoll RA, Kennedy SB. Transforming Patient Safety: A Sector-Wide Systems Approach: Report of the WISH Patient Safety Forum 2015. Qatar: World Innovation Summit for Health; 2015, http://dpnfts5nbrdps.cloudfront.net/app/media/1430, accessed 18 February 2015, 52 pages.
17. Baker GR, Black G. Beyond the Quick Fix. Toronto, ON: University of Toronto; 2015, http://ihpme.utoronto.ca/wp-content/uploads/2015/11/Beyond-the-Quick-Fix-Baker-2015.pdf, accessed 12 November 2015, 32 pages.
18. Illingworth J. Continuous improvement of patient safety: the case for change in the NHS. London, UK: The Health Foundation; 2015, http://www.health.org.uk/sites/default/files/ContinuousImprovementPatientSafety.pdf, accessed 12 November 2015, 40 pages.
19. De Keyser V, Woods DD. Fixation Errors: Failures to Revise Situation Assessment in Dynamic and Risky Systems. In: Colombo AG, de Bustamante AS, eds. Systems Reliability Assessment: Springer Netherlands; 1990: pp 231-251.
20. Woods DD, Cook RI. Perspectives on human error: hindsight biases and local rationality. In: Durso FT, Nickerson RS, Schvaneveldt RW, et al., eds. Handbook of Applied Cognition. 1st ed. New York, NY: John Wiley & Sons; 1999: pp 141-171.
21. Cavafy C. Waiting for the Barbarians. http://www.cavafy.com/poems/content.asp?id=119&cat=1, accessed 6 March 2014.
This study uses rigorous analysis to obtain important insights about the real-time information that our patients are handed at discharge. It is puzzling that the EMRs used were not named. One can infer from a look through the MSU website that they have both Cerner and Epic, but why is that necessary? The heart of quality/safety work is transparency balanced by humility, i.e., we shouldn't expect our IT systems to be any more perfect than we are, but they won't improve if we don't have more openness. The lack of scientific foundations and published post-marketing surveillance for our EHRs, especially the ascendant ones, was initially surprising. However, as they achieve complete market dominance, with less overt scientific review and public guidance and commentary, the silence is deafening. Is the BMJQS's failure to simply identify the names (or maybe I missed the citations) an oversight, or part of nondisclosure agreements with the vendors at the MSU institutions or at BMJQS?
As you point out, Root Cause Analysis will often fail with hospital adverse event (AE) data because it was not designed to deal with data arising in a complex system.1 The same can be said for Pareto analysis. Statistical process control (SPC) methods are often used to summarise AE data, particularly hospital infection data such as surgical site infections (SSIs) and bacteraemias.2 Standard SPC also frequently fails to summarise these complex data correctly.
With binary SSI data an approximate expected rate is frequently available, so cumulative observed-minus-expected and CUSUM analyses are appropriate.2 However, the changing observed rate is not seen unless the number of procedures is large enough for them to be grouped by months or quarters, which is often not the case. Even when such aggregation is possible, difficulties arise because the number of procedures in each month may differ markedly. This problem can be dealt with, at least approximately, by applying a generalised additive model (GAM) to the binary data to predict the observed AE rate at various points in the time series.
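The cumulative observed-minus-expected approach described above is straightforward to compute per procedure. A minimal sketch in Python follows; the outcome series and baseline rate are hypothetical illustrations, and a formal CUSUM scheme would add resetting and decision limits on top of this running sum:

```python
# Sketch of a cumulative observed-minus-expected (O - E) path for
# binary SSI data, given an approximate baseline (expected) rate.

def cumulative_o_minus_e(outcomes, expected_rate):
    """Return the running sum of (observed - expected), one value per procedure.

    outcomes: sequence of 0/1 values in time order (1 = infection).
    expected_rate: approximate baseline SSI probability per procedure.
    """
    total = 0.0
    path = []
    for infected in outcomes:
        total += infected - expected_rate  # drifts upward when worse than baseline
        path.append(total)
    return path

# Hypothetical series: 20 procedures, 3 infections, baseline rate 5%.
outcomes = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
path = cumulative_o_minus_e(outcomes, expected_rate=0.05)
print(round(path[-1], 2))  # 3 observed - 20 * 0.05 expected = 2.0
```

An upward-drifting path flags a rate above the assumed baseline without requiring monthly aggregation, which is why this display suits low-volume procedure series.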
Count and rate data such as bacteraemias or new isolates of an antibiotic-resistant organism will usually not have an expected rate available. These data are often grouped by months and a Shewhart chart used for their display. This chart requires a stable centre-line about which reliable control limits can be drawn. Often the mean value is used as the expected rate even though...
Vindrola-Padros and colleagues provide a helpful examination of co-production of quality improvement knowledge by university-based researchers in cooperation with members of service organizations. Another important type of embedded researcher is the "fully embedded" researcher, who is academically trained but employed by a large care delivery system. These individuals typically work in research units in the delivery systems. Their work is funded both by the systems themselves and by external private and public organizations, such as the Agency for Healthcare Research and Quality (AHRQ). These fully embedded researchers contribute actively to national professional forums and journals and sometimes collaborate with embedded researchers in other systems.
AHRQ leverages relationships with fully embedded researchers because of their deep and nuanced knowledge of internal system data and operations. Health systems-based researchers' ready access to care sites within which to test new approaches, and to data sources that permit rapid analysis of the results of those tests, is of great value to AHRQ as we seek solutions to real-world problems in areas of national importance. AHRQ-supported work of this kind demonstrates the value of health delivery organizations becoming "learning health systems" (1) – using their own internal data and resources to drive quality improvement and sharing their findings with other organizations.
AHRQ’s collaboration w...
I read with interest the article by Peerally et al (1) on 'The problem with root cause analysis'. I reflected on recent cases that happened at Royal North Shore Hospital and Sydney Hospital (2,3,4), which led me to consider which investigative tool is best applied to different incidents and identified risks. The use of appropriate tools and involvement of key stakeholders are crucial elements of a successful investig...