Thank you very much for your letter. We agree that the Schmidtke et al paper is highly relevant. In our discussion we note that 'recent research has emphasised the importance of meaningful representation and interpretation of data by boards', citing the accompanying editorial by Mountford and Wakefield, which provides an overview of both the Schmidtke et al paper and another paper from the same issue, by Anhøj et al, on 'Red Amber Green' stoplight reports.
Thanks to the authors for this insight. I wondered whether they had seen the paper by Schmidtke et al (http://qualitysafety.bmj.com/content/26/1/61), which deals with how boards are presented with data, including the consideration of chance (common cause variation). The material seems highly compatible.
Badawy et al describe, using statistical analysis, potential inaccuracy in the recording of respiratory rates (RR) in a large cohort of inpatients across a range of inpatient settings, adding to the body of data suggesting widespread inaccuracy in the measurement of RR.1 The accurate recording of RR is an important safety and quality issue, and the data provided by Badawy et al further underline the challenges of measuring this parameter in the inpatient setting.2 The problem having been elegantly demonstrated, the natural extension is to explore which methods might be employed to improve the accuracy and recording of RR measurement.
Several validated approaches might be adduced to address this deficiency in accurate RR measurement and recording. First, consideration could be given to introducing a system of audit whereby healthcare workers are observed recording RR measurements during their routine practice. Despite a likely Hawthorne effect, the results could be collated and then presented, non-punitively and anonymously, to organizational governance structures and to healthcare workers. This approach has been applied successfully in hand hygiene quality improvement, where observation with feedback has been shown to improve staff performance with an attendant reduction in adverse event rates.3
Second, technological solutions may also have a role in improving accuracy; for example, a touch pad based application that records respiratory rate from finger tapping has been shown to be potentially effective in paediatric settings.4 This technology employs an algorithm whereby the interval between taps (each tap corresponding to an observed breath) is used to calculate the RR, providing a real-time, self-refining measurement in which more taps generate greater accuracy. To further improve accuracy and data utility, results could be fed directly into a real-time electronic medical record system.
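For illustration only, the tap-interval algorithm just described can be prototyped in a few lines. The sketch below is not the cited application's implementation; the function name and example timings are our own assumptions. Each tap marks one observed breath, and the rate is recalculated from the accumulating intervals, with the median interval giving some robustness to a single mistimed tap.

```python
from statistics import median

def estimate_rr(tap_times_s):
    """Estimate respiratory rate (breaths/min) from tap timestamps in seconds.

    Each tap marks one observed breath; the rate is derived from the median
    interval between consecutive taps, so the estimate refines as more taps
    accumulate (illustrative sketch, not the cited application).
    """
    if len(tap_times_s) < 2:
        return None  # at least two taps are needed to form one interval
    intervals = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
    return 60.0 / median(intervals)

# Taps roughly every 3 seconds correspond to about 20 breaths/min
print(round(estimate_rr([0.0, 3.1, 6.0, 9.2, 12.0]), 1))  # -> 20.0
```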
Finally, complementing data collection on performance (with audit of that data) and the integration of assistive technology would be education measures. These could focus staff on the data describing the historical inaccuracy of RR recording, the assistive technology initiatives being put in place and the importance of accurate measurement for safety and quality. In addition, ongoing feedback to healthcare staff on their observed accuracy, as is done for hand hygiene, would also be important. Multifaceted education of this nature has been shown to be effective in other quality improvement initiatives.5
In conclusion, a combination of observation and audit, technological implementation and staff education could be used to address the important challenges in the measurement of respiratory rate identified by Badawy et al.
References:
1. Badawy J, Nguyen OK, Clark C, et al. Is everyone really breathing 20 times a minute? Assessing epidemiology and variation in recorded respiratory rate in hospitalised adults. BMJ Qual Saf 2017. Published Online First.
2. Fieselmann JF, Hendryx MS, Helms CM, et al. Respiratory rate predicts cardiopulmonary arrest for internal medicine inpatients. J Gen Intern Med 1993;8:354–60.
3. Pittet D, Hugonnet S, Harbarth S, et al. Effectiveness of a hospital-wide programme to improve compliance with hand hygiene. Lancet 2000;356:1307–12.
4. Karlen W, Gan H, Chiu M, et al. Improving the accuracy and efficiency of respiratory rate measurements in children using mobile devices. PLoS One 2014;9(6):e99266.
5. Naikoba S, Hayward A. The effectiveness of interventions aimed at increasing handwashing in healthcare workers: a systematic review. J Hosp Infect 2001;47:173–80.
We read with great interest the article by Flott et al (1), describing the challenges of using patient-reported feedback. We recognize the challenges described and performed a bachelor's project in the intensive care unit (ICU) of the University Medical Center Groningen (UMCG). We think the results from our project offer a potentially promising, practical solution to make feedback more useful.
In 2013 the UMCG participated in an independent multi-center study conducted among relatives of ICU patients (2). The open questions of the questionnaire revealed more dissatisfaction than expected, which fueled the quest for an alternative, simple and continuous feedback system. In this study we compared the quality and amount of feedback gathered by an oral survey during the first two weeks with that gathered by an app during the following two weeks.
Between February 20th and March 18th 2017, patients above sixteen years old who were listed for discharge from the ICU that day, and their relatives, were approached to participate in this study. The oral survey consisted of two simple questions: “How satisfied are you with your stay in the ICU? (grade 1-10)” and “Do you have specific suggestions for improvement for the ICU?”. The RateIt app (Rate It Limited®, Hong Kong), consisting of the same two questions as the oral survey, was used.
A total of 208 responses (133 patients and 75 relatives) were included. The median satisfaction score was 8. Despite this high score, many suggestions for improvement were given (n=95 suggestions from 68 respondents). The oral survey yielded suggestions for improvement more often than the app (50 vs. 18 respondents), and suggestions were made more frequently by relatives than by patients (57 suggestions from 37 relatives vs. 38 suggestions from 31 patients). All improvement suggestions were classified into one of six categories: ‘Surroundings’ 48/95 (51%), ‘Information, communication and education’ 23/95 (24%), ‘Patient care’ 15/95 (16%), ‘Attitude, handling and relation of caregiver with patient/relatives’ 7/95 (7%), ‘Emotional support’ 1/95 (1%) and ‘Care for relatives’ 1/95 (1%).
This simple study showed that an oral survey yields more suggestions for improvement than an app. The simplicity of the survey produced very specific, useful and practical suggestions, which were easily transformed into clear recommendations, such as “respect sufficient rest for our patients” or “don’t forget to provide food to the patients who are able to eat”. The survey can easily be repeated over time. These results may give a new perspective on how to conduct feedback studies.
The key suggestions for improvement found in this study were presented to the department in the form of a coat rack, an improvement frequently mentioned by relatives (a coat rack was missing in one of our family rooms). This coat rack will be hung in central places in our department, and on it recommendations based on the most important improvement suggestions will be displayed. We think this is one example of a simple but practical solution to make feedback more useful: every month the recommendations will be replaced by new ones, reminding all caregivers in our department of the feedback given by our patients and their relatives and thereby striving to improve our care.
We are well aware that the surveys used in the studies described in the article by Flott et al1 are much larger and more complex than the one we used in our study. We simply wanted to show that a learning point could be: don't overcomplicate.
References
1. Flott KM, Graham C, Darzi A, Mayer E. Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed. BMJ Qual Saf 2017;26:502-507.
2. Jensen HI, Gerritsen RT, Koopmans M, Zijlstra JG, Randall Curtis J, Ording H. Families’ experiences of intensive care unit quality of care: development and validation of a European questionnaire (euroQ2). J Crit Care 2015;30(5):884-90.
As you point out, Root Cause Analysis will often fail with hospital adverse event (AE) data because it was not designed to deal with data arising in a complex system.1 The same can be said of Pareto analysis. Statistical process control (SPC) methods are often used to summarise AE data, particularly hospital infection data such as surgical site infections (SSIs) and bacteraemias.2 Standard SPC also frequently fails to summarise these complex data correctly.
With binary SSI data an approximate expected rate is frequently available, so cumulative observed-minus-expected and CUSUM analyses are appropriate.2 However, the changing observed rate cannot be seen unless the number of procedures is large enough for them to be grouped by month or quarter, which is often not the case. Even when such aggregation is possible, difficulties arise because the number of procedures in each month may differ markedly. This problem can be dealt with, at least approximately, by applying a generalised additive model (GAM) analysis to the binary data, which predicts the observed AE rate at various points in the time series.
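As a purely illustrative sketch (not the analysis of reference 2), the Python fragment below fits a spline-based binomial regression, a practical stand-in for a GAM, to simulated binary SSI data and recovers a smoothed infection rate with confidence limits at each procedure date, without any monthly aggregation. All data, library choices and parameter values are assumptions made for the example.

```python
# Illustrative only: simulated SSI outcomes and a B-spline logistic model
# (statsmodels + patsy) standing in for the GAM analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800                                        # procedures over roughly two years
df = pd.DataFrame({"day": np.sort(rng.uniform(0, 730, n))})
true_p = 0.02 + 0.03 * np.exp(-(((df["day"] - 400) / 120) ** 2))  # transient rise
df["ssi"] = rng.binomial(1, true_p)            # 1 = surgical site infection

# Logistic regression on a B-spline basis of calendar time
fit = smf.glm("ssi ~ bs(day, df=5)", data=df,
              family=sm.families.Binomial()).fit()

# Smoothed SSI rate and 95% confidence limits at every procedure date
pred = fit.get_prediction(df).summary_frame(alpha=0.05)
df["rate"] = pred["mean"]
df["lower"] = pred["mean_ci_lower"]
df["upper"] = pred["mean_ci_upper"]
print(df[["day", "rate", "lower", "upper"]].iloc[::200].round(3))
```

Plotting the smoothed rate and its limits against calendar time then shows the changing AE rate directly, which is the information that monthly aggregation often obscures.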
Count and rate data, such as bacteraemias or new isolates of an antibiotic-resistant organism, will usually not have an expected rate available. These data are often grouped by month and a Shewhart chart used for their display. This chart requires a stable centre line about which reliable control limits can be drawn. Often the mean value is used as the expected rate even though it may be representative of few or none of the monthly data values, which makes the control limits meaningless. A practical way round this is to employ confidence limits for the monthly counts or rates. Viewed as a likelihood-supported range, these allow the extent of each of the monthly counts or rates to be assessed. If a GAM analysis is added, the predicted rate and its confidence limits can also be obtained throughout the time series.2
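A minimal sketch of the confidence-limit display is given below, assuming invented monthly counts: exact (chi-square based) 95% limits are computed for each month's count and could be plotted around the observed values instead of control limits drawn about an unrepresentative centre line. For rates, each limit would simply be divided by that month's denominator (for example, patient-days).

```python
# Illustrative only: exact (Garwood) 95% confidence limits for monthly counts
# of, say, bacteraemias, displayed instead of Shewhart control limits.
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    """Exact two-sided confidence interval for an observed Poisson count k."""
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

monthly_counts = [3, 5, 2, 8, 4, 1, 6, 9, 3, 2]   # invented example data
for month, k in enumerate(monthly_counts, start=1):
    lo, hi = poisson_ci(k)
    print(f"month {month:2d}: count {k:2d}, 95% CI ({lo:4.1f}, {hi:5.1f})")
```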
This approach is more in keeping with the complexity of the processes responsible for the AE than is standard SPC, which was not designed to deal with complex systems.
As an aside, it is worth noting that some swamps may be valuable ecosystems, so this popular analogy is a poor one. Like Root Cause Analysis, it belongs to the realm of simple or complicated systems, not complex ones.
1. Morton A, Whitby M, Tierney N, Sibanda N, Mengersen K. Statistical methods for hospital monitoring. Wiley StatsRef: Statistics Reference Online 2016:1–8.
2. Morton A, Mengersen K, Whitby M, Playford G. Statistical Methods for Hospital Monitoring with R. Chichester: John Wiley and Sons, 2013.
Vindrola-Padros and colleagues provide a helpful examination of the co-production of quality improvement knowledge by university-based researchers in cooperation with members of service organizations. Another important type of embedded researcher is the “fully embedded” researcher, who is academically trained but employed by a large care delivery system. These individuals typically work in research units within the delivery systems. Their work is funded both by the systems themselves and by external private and public organizations, such as the Agency for Healthcare Research and Quality (AHRQ). These fully embedded researchers contribute actively to national professional forums and journals and sometimes collaborate with embedded researchers in other systems.
AHRQ leverages relationships with fully embedded researchers because of their deep and nuanced knowledge of internal system data and operations. Health system-based researchers’ ready access to care sites in which to test new approaches, and to data sources that permit rapid analysis of the results of those tests, is of great value to AHRQ as we seek solutions to real-world problems in areas of national importance. AHRQ-supported work of this kind demonstrates the value of health delivery organizations becoming “learning health systems”(1), using their own internal data and resources to drive quality improvement and sharing their findings with other organizations.
AHRQ’s collaboration with researchers in the Palo Alto Medical Foundation (PAMF) Research Institute provides a powerful example of how partnership between fully embedded researchers and external funding agencies contributes to health system learning. AHRQ partnered with Kaiser Permanente and PAMF researchers to study implementation of a Lean-based redesign to improve care delivery efficiency in PAMF’s primary care clinics.(2) Applying Lean analysis techniques, PAMF discovered inefficiencies in a pilot primary care clinic and redesigned work roles and workflow to enhance coordination among physicians and to better support them. Key changes included:
• New roles for medical assistants as “flow managers”, facilitating physicians’ work and performing administrative tasks, such as handling email, that previously burdened physicians
• New workflows, including daily huddles for scheduling and agenda setting during patient visits
• Co-location of physician-medical assistant teams in a shared workspace.
PAMF then tested these new roles and processes in three additional clinics, assessed the improvements’ effects, and rolled the changes out to 13 additional clinics.
PAMF researchers interviewed staff to uncover factors influencing successful implementation of these changes and system requirements for successful redesign of care. (3-4) To assess changes in efficiency, they analyzed rich and timely internal data sources such as:
• Physician efficiency metrics derived from PAMF’s time-stamped EHR data and other operational sources
• PAMF’s routine patient and personnel surveys
• Standardized quality metrics that PAMF reports.
Their research showed that PAMF’s primary care redesigns boosted efficiency without sacrificing quality and satisfaction.(5) AHRQ and PAMF disseminated these valuable findings widely through practice-oriented briefs, conference presentations, and webinars, as well as in peer-reviewed papers.
PAMF’s fully embedded researchers promoted internal learning by tracking progress and outcomes of the Lean improvement efforts and providing feedback to their system’s leaders and staff. AHRQ and the PAMF researchers promoted system-wide learning about Lean-based primary care redesign by broadly disseminating the study’s findings and implementation lessons.
3. Hung D, Gray C, Martinez M, Schmittdiel J, Harrison MI. Acceptance of Lean redesigns in primary care: a contextual analysis. Health Care Manage Rev 2017;42:203-212.
4. Gray C, Harrison MI, Hung D. Medical assistants as flow managers in primary care: challenges and recommendations. J Healthc Manag 2016;61:181-191.
5. Hung D, Harrison MI, Martinez M, Luft H. Scaling Lean in primary care: impacts on system performance. Am J Manag Care 2017; 23(3):161-168.
This study uses rigorous analysis to obtain important insights about the real-time information that our patients are handed at discharge. It is puzzling that the EMRs used were not named. One can infer from a look through the MSU website that they have both Cerner and Epic, but why should that inference be necessary? The heart of quality and safety work is transparency balanced by humility; that is, we shouldn't expect our IT systems to be any more perfect than we are, but they won't improve if we don't have more openness. The lack of scientific foundations and published post-marketing surveillance for our EHRs, especially the ascendant ones, was initially surprising. However, as they achieve complete market dominance, with less overt scientific review, public guidance and commentary, the silence is deafening. Is BMJ Qual Saf's failure simply to identify the names (or perhaps I missed the citations) an oversight, or part of nondisclosure agreements with the vendors at the MSU institutions or at the journal?
In this article, the authors propose that little evidence exists in healthcare to show that application of Highly Reliable Organization (HRO) principles has resulted in significant or sustained improvement in performance. Further, they attribute the problem partially to under-recognizing the role of habit in the process. While we fully agree that forming habitual behavior is essential to creating an HRO, we are far less pessimistic about the state of affairs in the healthcare industry, particularly in pediatrics.
Vogus and Hilligoss primarily drew their many references to suboptimal implementation of HRO principles from the adult patient care arena. In pediatrics, multiple hospital systems have used HRO principles to achieve significant and sustained improvement in safety performance on specific metrics, such as reducing adverse drug events(1), improving hand hygiene compliance(2) and reducing serious safety events(3), as well as reducing all forms of preventable harm across entire systems(4). We are limited by the number of references we are allowed to cite; however, we could list at least another dozen papers showing high reliability principles, effectively implemented, linked to sustained and robust improvement in clinical outcomes. Further, beginning over a decade ago, pediatric hospitals joined together to form collaboratives (currently over 100 participants), agreeing not to compete on quality but to share data and discover best practices. The collaboratives implement high reliability principles and measure compliance with clinical practice expectations in order to improve outcomes on multiple patient safety related measures. Compliance with various care practice bundles by participating hospitals is at the core of these efforts, and the results are significant(5).
We applaud the Virginia Mason Production System (VMPS) and the Comprehensive Unit-based Safety Program (CUSP) and the results they have achieved. The cultivation of mindfulness and shared habits through the various mechanisms described represents variations of tactics and strategies we have been using in pediatrics. However, we do not agree with the statement that these programs "operate independently of any specific leader". While specific leaders can and do change, we believe that vigorous and steady leadership from the top is necessary to maintain the gains. Therefore, we agree the process can be fragile in the face of leadership change.
The authors did not discuss an element that we believe is essential to achieving optimal quality and safety outcomes: transparency. Early in the Ohio Children's Hospitals' collaborative, hospital CEOs declared that they would put inter-hospital rivalries and competition aside for the sake of safety. Their stated goal was to develop the kind of trust that allowed open data and best practice sharing, so that a culture of rapid information sharing (all teach, all learn) could develop. Real and sustained progress requires internal and external transparency, and we suggest the Ohio Solutions for Patient Safety Collaborative results support this belief(5).
To be clear, in pediatrics we are not yet where we want to be regarding quality and safety. Most systems have declared elimination of preventable harm as the ultimate goal, recognizing that, in many ways, it is an unending journey, but the only truly acceptable aspirational goal. Yet pediatrics is making measurable and significant progress using HRO principles, and we fully embrace the importance of developing mindful habits as an important part of that process.
References
1. McClead RE, Catt C, Davis JT, Morvay S, Merandi J, Lewe D, Stewart B, Brilli RJ. An Internal Quality Improvement Collaborative Significantly Reduces Hospital-wide Medication Error Related Adverse Drug Events. J Pediatr 2014;165(6):1222-9.
2. Toltzis P, O'Riordan M, Cunningham DJ, Ryckman FC, Bracke TM, Olivea J, Lyren A. A Statewide Collaborative to Reduce Pediatric Surgical Site Infections. Pediatrics 2014;134(4):e1174-e1180.
3. Muething SE, Goudie A, Schoettker PJ, Donnelly LF, Goodfriend MA, Bracke TM, Brady PW, Wheeler DS, Anderson JM, Kotagal UR. Quality Improvement Initiative to Reduce Serious Safety Events and Improve Patient Safety Culture. Pediatrics 2012;130:e423-e431.
4. Brilli RJ, McClead RE Jr, Crandall WV, Stoverock L, Berry JC, Wheeler TA, Davis JT. A comprehensive patient safety program can significantly reduce preventable harm, associated costs, and hospital mortality. J Pediatr 2013;163(6):1638-45. Epub 2013 Jul 30.
5. Lyren A, Brilli R, Bird M, Lashutka N, Muething S. Ohio Children's Hospitals' Solutions for Patient Safety: A Framework for Pediatric Patient Safety Improvement. J Healthc Qual 2015;38(4):213-222.
I read with interest the paper by Gillespie and Reader presenting the Healthcare Complaints Analysis Tool (HCAT) (1). The authors suggest that the HCAT could be used "as an alternative metric of success in meeting standards" and as a way "to benchmark units or regions". However, this assumes that the volume and strength of complaints received are an accurate reflection of the standard of care being delivered. In fact, they may be more heavily influenced by the ability and willingness of patients (or their relatives) to make a complaint. A hospital or unit could have a poor standard of care but receive few complaints, especially if it has a high proportion of patients from demographic groups that are less likely to complain. For example, a recent report from the Parliamentary and Health Service Ombudsman found far fewer complaints from the elderly than would be expected based upon their service usage (2). Patients from certain ethnic minorities and less affluent social grades have also been identified as groups less likely to complain (3). Moreover, many complaints may be verbalised but not formally articulated in a written statement (4). The HCAT may have a valuable role in organising complaints, but using it to benchmark quality as the authors suggest could be misleading and give a false sense of reassurance. We must have a mechanism to systematically assess poor quality care; whilst written patient complaints can be part of this, they should not be regarded as an independent metric of quality.
1. Gillespie A, Reader TW. The Healthcare Complaints Analysis Tool: development and reliability testing of a method for service monitoring and organisational learning. BMJ Qual Saf 2016;25:937-946.
2. Breaking down the barriers: older people and complaints about health care. Parliamentary and Health Service Ombudsman, December 2015. Available at: http://www.ombudsman.org.uk/about-us/news-centre/press-releases/2015/frail-older-people-too-afraid-to-complain-about-poor-care [accessed 11/12/16]
3. Fear of raising concerns about care. A research report for the Care Quality Commission. April 2013. Available at: https://www.cqc.org.uk/sites/default/files/documents/201304_fear_of_raising_complaints_icm_care_research_report_final.pdf [accessed 11/12/16]
4. Cornwell J, Levenson R, Sonola L, Poteliakhoff E. Continuity of care for older hospital patients. A call for action. The King's Fund, March 2012. Available at: https://www.kingsfund.org.uk/sites/files/kf/field/field_publication_file/continuity-of-care-for-older-hospital-patients-mar-2012.pdf [accessed 11/12/16]
We have read with great interest the article by Schiff et al,1 in which 6.1% of errors reported to the United States Pharmacopeia MEDMARX reporting system were classified as being related to the computerized prescription order entry (CPOE) system, representing the third most frequently reported type of error in this notification system.
Similarly, in a study conducted in our hospital, approximately 24% of drug-related problems were due to the use of the CPOE.2 This type of error was more frequently detected after a team of clinical pharmacists reviewed the drug treatment of hospital inpatients.
One of the major limitations of current classifications of drug-related problems is that they do not include the various types of CPOE-related errors.3,4 Consequently, Schiff et al developed a new taxonomy for this type of error, which is essential for epidemiological surveillance and for the continual improvement of the safety of CPOE systems. These authors identified the 25 most frequent CPOE-related errors. Similarly, the 8 most frequent types of CPOE-related error in our study were the following:
1) Drugs included in the hospital formulary but prescribed as "not available" in the CPOE (for example, spelling mistakes or the use of the brand rather than the generic name leads to a failure to find the drug in the application).
2) Duplicate orders (exact same drug and dosage).
3) Incorrect entry of a prescribed dose, resulting in a dosage higher or lower than recommended.
4) Inappropriate frequency of administration (frequencies are often specified in a free-text comment). For example, for digoxin, 1 tablet per day, the free-text comments field may state "except Saturday and Sunday"; since the free-text comment bypasses the computer circuit designed for discontinuous regimens, the medication chart will state that this drug should be administered on Saturdays and Sundays.
5) Inappropriate route of administration.
6) Inappropriate treatment duration (due to failure to use the end-date field or days of duration).
7) Unintended discrepancies in dosage (prescribed dosage different from the patient's existing dosage).
8) Designation of a clinical trial drug as "not included in the hospital formulary" instead of the use of a specific clinical trials application for the CPOE.
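To illustrate how one of these patterns might be screened for, the sketch below flags orders in which a structured frequency coexists with a free-text comment that appears to modify the administration schedule (error type 4 above). This is not the CPOE system described in our hospital; the order fields and keyword list are assumptions made for the example, and such a screen would only prompt pharmacist review rather than replace a structured function for discontinuous regimens.

```python
# Illustrative screening rule only (not the CPOE described in this letter):
# flag orders whose free-text comment may override the structured frequency,
# e.g. digoxin once daily with the comment "except Saturday and Sunday".
import re

SCHEDULE_KEYWORDS = re.compile(
    r"\b(except|only on|alternate|every other|on (mon|tue|wed|thu|fri|sat|sun))",
    re.IGNORECASE,
)

def flags_frequency_bypass(order):
    """Return True when a structured frequency coexists with a free-text
    comment that looks like it changes the administration schedule."""
    comment = order.get("comment") or ""
    return bool(order.get("frequency")) and bool(SCHEDULE_KEYWORDS.search(comment))

orders = [
    {"drug": "digoxin", "frequency": "once daily",
     "comment": "except Saturday and Sunday"},
    {"drug": "amoxicillin", "frequency": "three times daily", "comment": ""},
]
for order in orders:
    if flags_frequency_bypass(order):
        print(f"Review schedule for {order['drug']}: comment may override frequency")
```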
Unlike the study by Schiff et al, one of the most frequently encountered CPOE-related errors in our experience was prescription of a drug included in the hospital formulary using an option in the CPOE designed for drugs not available in the formulary. This can lead to a delay in administering the drug to the patient, because the prescribed drug requires pharmaceutical validation as if it were not included in the formulary, and nursing staff do not see it listed on the medication chart to be administered. Also unlike our study, one of the codes identified by Schiff et al was nursing administration issues. The absence of this type of error in our study was due to the implementation, in parallel with the CPOE, of a computerized application that provides nursing staff with information on the mode of drug administration, with the aim of unifying this process across the hospital.5 Together with this information, the application allows the time of administration of each drug to be specified, so that the administration time appears automatically after prescription, as well as the compatible diluent(s) for drugs requiring dilution.
Our study may have identified a lower number of types of CPOE-related errors because the sample was drawn from a single hospital and because we included only those errors due to the use of the CPOE.
In addition to analysing CPOE-related errors, Schiff et al also evaluated the causes of these errors and identified codes for their prevention. Similarly, in our hospital several strategies were progressively adopted to reduce this type of error: administration units were adapted to paediatric patients, numerous computerized protocols were designed to standardise drugs associated with specific processes, and the CPOE was modified to allow visualization, in the lower part of the admission order, of the drug and dosage taken by the patient before admission. However, several safety aspects related to the CPOE remain to be resolved.
One of the limitations of our study is that it is difficult to extrapolate the CPOE system to other hospital settings, given that the system was designed and developed specifically for the characteristics of our hospital and is not commercially available. One of its strengths is that the data are drawn from a prospective review of all drug treatments by a team of clinical pharmacists, whereas data from other studies have been drawn from voluntary notification systems, which could lead to underdetection of errors as well as a lack of data on their registration.
In our opinion, the study by Schiff et al is a highly valuable contribution because, in addition to providing a new classification of CPOE-related errors, it also describes strategies for their prevention. Given the strong impact of this type of error, a common classification system for CPOE-related errors is essential. Such a system would allow benchmarking between different hospitals independently of the CPOE system used, which in turn would support the development of error prevention systems and/or new CPOE systems designed to avoid these errors.
Olatz Urbina Bengoa1, Olivia Ferrandez Quirante1, Marta De Antonio Cusco1, Nuria Carballo Martinez1, Santiago Grau Cerrato1
1Pharmacy Department, Hospital del Mar, Pg Maritim 25-29, 08003 Barcelona
Tel: 932483704; Fax: 932483256
References
1. Schiff GD, Amato MG, Eguale T, et al. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems. BMJ Qual Saf 2015;24(4):264-71.
2. Urbina O, Ferrandez O, Grau S, et al. Design of a score to identify hospitalized patients at risk of drug-related problems. Pharmacoepidemiol Drug Saf 2014;23(9):923-32.
3. Pharmaceutical Care Network Europe. The PCNE classification for drug-related problems V 6.2 [Internet]. 2010; http://www.pcne.org. (Accessed 22 Nov 2016)
4. van Mil JW, Westerlund LO, Hersberger KE, Schaefer MA. Drug-related problem classification systems. Ann Pharmacother 2004;38(5):859-67.
5. Salas E, Bastida M, Grau S, et al. Quality project to improve drug administration in the hospital trust of the city council of Barcelona. International Forum on Quality and Safety in Health Care. British Medical Journal Group. Barcelona, April 2007. (Oral communication; data not published).