
Recent eLetters

  1. What did Albert Einstein ever do?

    Reed and Card's essay on the problem of valuing action over thought could not have come at a better time. For years, quality and safety mavens have been paraphrasing Goethe -- "Knowing is not enough ... we must do". But the resulting culture of 'do, do, do' has brought us quite a lot of doo-doo.

    To counter this, consider the question, "What did Einstein ever do?" He invented nothing, patented nothing, created no research teams, built no institutions, presided over nothing. He published a few academic papers, but funding agencies today don't care about academic papers -- they want action, and so they get -- doo-doo.

    Conflict of Interest:

    None declared

  2. Patient-centered bedside rounds: exploring patient preferences before patient-centered care

    Dear Editor,

    It was with great interest that we read the study by O'Leary et al published in the December issue of the journal, and we were quite surprised by their finding that patient-centered rounds had no impact on patients' perceptions of shared decision making, activation, and satisfaction with care.1

    Previous studies have shown that patients prefer that their rounding team conduct rounds at the bedside.2-5 Based on these studies, one would expect that if bedside rounds were conducted, patients would feel more satisfied with their care and be more engaged in medical decision making compared with other forms of rounding.

    The findings of this study do make us pause and reconsider some of our preconceived beliefs regarding patient benefits from bedside rounds. The authors propose several explanations to support their findings. However, before abandoning patient-centered bedside rounding (PCBR), one must consider several potential issues that may make this study less generalizable. One explanation not explored is the possibility that patients were asleep at that time of day (7.30 am) and may not have wanted, or been in a position, to participate in PCBR. Additionally, the control group was cared for by a small team: one of the units was a cardiac-type unit where patients were being admitted to initiate and monitor medications. It is possible that the small care team, the type of patients on that unit, and the general structure of the control team were such that patients already had positive perceptions of their care, which might explain the findings. It is also possible that the language used by the team during PCBR after the initial few weeks of mentoring contained medical jargon, or that the script used to invite patient participation was not conducive to patient engagement. These potential factors could have affected patients' perceptions.

    An important critique of the study is that patients were not given a choice of whether they in fact wanted PCBR. In our experience of querying Veterans at our institution, while the majority state they would prefer PCBR ("If they have something to say, I want to hear it"), a sizable minority prefer NOT to have PCBR, as they are uncomfortable with medical uncertainty and with hearing worst-case scenarios ("You're the doctor. You tell me"). Perhaps a better approach would be to ask every patient upon admission whether they prefer team rounds at the bedside or outside the room. The team should ask the patient how much they would like to participate in their care and whether they would be comfortable if other patients were in the hospital room, and then proceed accordingly. It is essential to explain what these rounds entail and the possible roles the patient can assume. Once patients are provided this information, they may be in a better position to think through their choice of whether to actively participate and become informed customers, and to make the choice that best suits their preferences. Given the challenges of executing PCBR and O'Leary et al's findings, perhaps PCBR should be offered only to those patients who actually want it. As simplistic as it sounds, patients should be asked, in a patient-centered manner, whether they actually want patient-centered bedside rounding.

    REFERENCES:

    1. O'Leary KJ, et al. Effect of patient-centered bedside rounds on hospitalized patients' decision control, activation and satisfaction with care. BMJ Qual Saf 2015;0:1-8. doi:10.1136/bmjqs-2015-004561

    2. Wang-Cheng RM, et al. Bedside case presentations: why patients like them but learners don't. J Gen Intern Med 1989;4(4):284-7.

    3. Rogers HD, Carline JD, Paauw DS. Examination room presentations in general internal medicine clinic: patients' and students' perceptions. Acad Med 2003;78(9):945-9.

    4. Gonzalo JD, et al. The return of bedside rounds: an educational intervention. J Gen Intern Med 2010;25(8):792-8.

    5. Lehmann LS, et al. The effect of bedside case presentations on patients' perceptions of their medical care. N Engl J Med 1997;336(16):1150-5.

    Conflict of Interest:

    None declared

  3. The problem with incident reporting

    Dear Sir or Madam

    I read with interest the editorial by Carl Macrae on incident reporting. I wonder whether, in making a detailed comparison with aviation and other industries, Macrae loses sight of one important reason why health services staff report incidents. My experience suggests that the purpose of reports is often not to learn from incidents but to allow staff to pre-emptively give their version of events in case punitive sanctions follow from an incident. Defensive actions and fear of blame seem commonly to drive reporting. In such circumstances, biased reporting and telling your boss are understandable responses.

    Conflict of Interest:

    None declared

  4. Re: More information on safety culture in long-term care

    We appreciate Dr. Singer's point about a more thorough discussion of the large literature on safety climate and tools for assessing it. Although we did include two of the articles she refers to, not all were included. While acknowledging and discussing other instruments for measuring patient safety climate (PSC) would have made our article more complete, the findings and conclusions of the study would not have changed. For instance, we would still have chosen the Safety Attitudes Questionnaire (SAQ) to measure PSC in nursing and residential homes, as we aimed to benchmark our findings against other health care settings (inpatient, ICU, ambulatory care) in the Netherlands and abroad. The benchmarking results constitute a substantial part of our findings and discussion. The SAQ is a frequently used survey in multiple healthcare settings and is often used as a foundation for other PSC surveys. Thus, we chose the SAQ so that our assessment of PSC in nursing and residential homes in the Netherlands would not stand in isolation, but could be considered in the context of international results. In conclusion, we should have discussed more recent literature on other possible PSC surveys and explained better why we chose the SAQ. But we believe this oversight does not affect the substance of our findings -- nor, apparently, does Dr. Singer.

    Conflict of Interest:

    None declared

  5. More information on safety culture in long-term care

    To the Editor: I was a little surprised to see Buljac-Samardzic et al, in their recent article on safety culture in long-term care, state that few tools are available to evaluate the effectiveness of initiatives to improve safety culture in nursing and residential homes. While there may be fewer tools available for nursing and residential homes than for inpatient settings, there are several safety climate instruments that are worthy of note [1-5]. Additionally, the authors provide weak support for their reliance on their instrument of choice. To conclude that one survey is the best available general climate measure based on a review from 2005 seems incomplete. While I agree with the authors that we need more instruments for measurement in nursing and residential homes, and I believe they selected a fine measure of safety climate, the authors do themselves and their study a disservice by not providing a more thorough acknowledgement of previous research.

    References

    1 Handler SM, Castle NG, Studenski SA, et al. Patient safety culture assessment in the nursing home. Quality and Safety in Health Care 2006; 15:400-4. doi:10.1136/qshc.2006.018408

    2 Hughes CM, Lapane KL. Nurses' and nursing assistants' perceptions of patient safety culture in nursing homes. Int J Qual Health Care 2006; 18:281-6. doi:10.1093/intqhc/mzl020

    3 Singer SJ, Kitch BT, Rao SR, et al. An Exploration of Safety Climate in Nursing Homes. J Patient Saf 2012; 8:104-24. doi:10.1097/PTS.0b013e31824badce

    4 Hartmann CW, Meterko M, Zhao S, et al. Validation of a novel safety climate instrument in VHA nursing homes. Medical Care Research and Review 2013; 70:400-17. doi:10.1177/1077558712474349

    5 Bonner AF, Castle NG, Perera S, et al. Patient Safety Culture: A Review of the Nursing Home Literature and Recommendations for Practice. Ann Longterm Care 2008; 16:18-22.

    Conflict of Interest:

    None declared

  6. Statistical analysis of differences in turnover times among operating theatres

    Overdyk et al used remote video auditing with real-time feedback in a surgical suite [1]. As part of their randomized trial clustered by theatre, they report shorter turnover times among "fast rooms", those generally including three or more cases per day.

    Successive turnover times between scheduled cases within theatres on the same date tend to be correlated (e.g., because they involve the same surgeon, nurses, and anaesthetist), as shown by Dexter et al in 2005 and Austin et al in 2014 [2,3]. Overdyk et al's description of their statistical model does not appear to account for correlation of turnover times within the same theatre on the same day. This can be rectified by including the theatre-day combination as a fixed or random effect. Alternatively, and more typically, analyses take the simpler approach of batching (binning) by day, week, 2-week, or 4-week period and then comparing the periods (e.g., weeks) pairwise between control (i.e., no feedback) and intervention (i.e., feedback) theatres (e.g., reference [3]). When the authors sort the turnover times by date, then theatre, and then start time, do they observe statistically significant lag-1 correlation? If so, when the analyses are repeated either by including that correlation in the statistical model or by making comparisons pairwise by week (or another suitable interval), what are the revised results of the authors' Table 2?
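
    As a purely illustrative aside for readers who want to check this in their own data, a minimal sketch of the lag-1 correlation check and of a model with a theatre-day random effect might look as follows (Python with pandas and statsmodels; the file and column names turnovers.csv, turnover_min, theatre, date, start_time, and group are hypothetical and not taken from the study):

    ```python
    # Minimal illustrative sketch, not the authors' analysis. Column names are
    # hypothetical: one row per turnover with its duration in minutes.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("turnovers.csv")
    df = df.sort_values(["date", "theatre", "start_time"])

    # Lag-1 correlation of successive turnover times within the same theatre-day
    df["prev_turnover"] = df.groupby(["date", "theatre"])["turnover_min"].shift(1)
    paired = df.dropna(subset=["prev_turnover"])
    print("lag-1 r =", paired["turnover_min"].corr(paired["prev_turnover"]))

    # Mixed model: intervention arm as fixed effect, theatre-day as random intercept
    df["theatre_day"] = df["theatre"].astype(str) + "_" + df["date"].astype(str)
    fit = smf.mixedlm("turnover_min ~ group", data=df, groups=df["theatre_day"]).fit()
    print(fit.summary())
    ```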

    Turnovers occurring simultaneously among theatres on the same day can also be correlated if personnel (e.g., housekeepers and anaesthesia technicians) are shared [4,5]. For example, Wang et al found that most turnovers greater than 1 hour at their studied surgical suite occurred when there were >2 simultaneous turnovers [5]. Overdyk et al's description of their statistical model does not appear to account for correlation of turnover times occurring at the same day and time among theatres. Although this can be rectified by including a fixed effect for the time-varying number of simultaneous turnovers, analyses usually compensate by batching (binning) by day, week, etc. [2-5]. Do the authors find significant correlation between the number of simultaneous turnovers at each time and turnover times? If so, when the analyses are repeated, what are the revised results of the authors' Table 2?
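
    To illustrate the batching alternative described above, a minimal sketch under the same hypothetical file and column names, binning turnover times by week and comparing the two arms pairwise across weeks, could look like this:

    ```python
    # Minimal illustrative sketch, not the authors' analysis: batch turnovers by
    # week and compare control vs feedback theatres pairwise across weeks.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("turnovers.csv", parse_dates=["date"])
    df["week"] = df["date"].dt.to_period("W")

    # Mean turnover time per week for each arm ('control' / 'feedback' assumed)
    weekly = (
        df.groupby(["week", "group"])["turnover_min"]
          .mean()
          .unstack("group")
          .dropna()
    )

    # Each week contributes one control/feedback pair
    t, p = stats.ttest_rel(weekly["control"], weekly["feedback"])
    print(f"paired t = {t:.2f}, p = {p:.4f}")
    ```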

    References

    1 Overdyk FJ, Dowling O, Newman S, et al. Remote video auditing with real-time feedback in an academic surgical suite improves safety and efficiency metrics: a cluster randomised study. BMJ Qual Saf 2015. PMID: 26658775

    2 Dexter F, Epstein RH, Marcon E, et al. Estimating the incidence of prolonged turnover times and delays by time of day. Anesthesiology 2005;102:1242-8

    3 Austin TM, Lam HV, Shin NS, et al. Elective change of surgeon during the OR day has an operationally negligible impact on turnover time. J Clin Anesth 2014;26:343-9

    4 Dexter F, Marcon E, Aker J, et al. Numbers of simultaneous turnovers calculated from anesthesia or operating room information management system data. Anesth Analg 2009;109:900-5

    5 Wang J, Dexter F, Yang K. A behavioral study of daily mean turnover times and first case of the day start tardiness. Anesth Analg 2013;116:1333-41

    Conflict of Interest:

    Financial disclosure: Arrowsight paid the University of Iowa's Department of Anesthesia for consulting by Dr. Dexter in 2012 (see http://www.FranklinDexter.net/FAQ.htm).

  7. Public website for interrupted time series and segmented regression

    We agree with the authors that interrupted time series should be used more often (1). We also agree that the statistics are difficult. We find segmented regression to be the preferable form of interrupted time series (ITS), because traditional ITS with the Davies test looks only for a change in slope at the breakpoint. This works well if there is no simultaneous shift in the level of the outcome at the breakpoint; however, when both a change in slope and a shift in level occur, the Davies test is problematic. In addition to segmented regression, we use multivariable linear regression to detect secular trends in outcomes over time.
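
    For readers who want to see the model structure behind the website, a minimal segmented-regression sketch with both a level-shift term and a slope-change term at a known breakpoint might look as follows (shown in Python with statsmodels purely for illustration; the qitools code itself is written in R, and the file and variable names monthly_rates.csv, month, rate, and bp are hypothetical):

    ```python
    # Minimal illustrative sketch of segmented regression (not the qitools code):
    # a monthly outcome series with an intervention starting at a known breakpoint.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("monthly_rates.csv")   # hypothetical columns: month (1..N), rate
    bp = 24                                  # month in which the intervention began

    df["time"] = df["month"]                               # secular trend
    df["level"] = (df["month"] >= bp).astype(int)          # shift in level at breakpoint
    df["trend"] = df["level"] * (df["month"] - bp)         # change in slope after breakpoint

    fit = smf.ols("rate ~ time + level + trend", data=df).fit()
    print(fit.summary())
    # 'level' estimates the immediate shift at the breakpoint;
    # 'trend' estimates the change in slope after the intervention.
    ```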

    In response to the difficulties, we have placed online at http://qitools.github.io/ a resource for using and teaching segmented regression. The website accepts data sets by pasting or uploading values.

    The underlying source code is written in R and is publicly available at GitHub (https://github.com/qitools/charts). In addition to being open source, the code is implemented online at openCPU, so users do not have to install R on their own computers. The combination of GitHub and openCPU allows for crowdsourcing improvements or alternative versions. We encourage other investigators to improve on the source code at https://github.com/qitools/charts and to build on the implementation we have started.

    References:

    1. Fretheim A, Tomic O. Statistical process control and interrupted time series: a golden opportunity for impact evaluation in quality improvement. BMJ Qual Saf. 2015 Dec;24(12):748-52. doi: 10.1136/bmjqs-2014-003756. PMID: 26316541

    Conflict of Interest:

    None declared

  8. Response to 'Tall Man lettering and potential prescription errors: a time series analysis of 42 children's hospitals in the USA over 9 years' by Feudtner et al

    We were very interested to read the recent article by Feudtner et al,1 which reported that Tall Man lettering did not significantly change the rate of look-alike sound-alike (LASA) prescription or dispensing medication errors in 42 children's hospitals from 2004 to 2012. Feudtner et al's study is very valuable work: they performed an extensive statistical analysis of routine medication pairs in their hospitals and carefully discussed the limitations of their results.

    It is well documented that drugs whose names are spelled or sound alike may cause potentially dangerous medication errors. LASA errors are prevalent both inside and outside the hospital, but they are more dangerous in the latter setting because patients are not readily available for monitoring.2 We have encountered frequent outpatient LASA errors in our clinical practice in recent years, including: a 32-year-old woman who was prescribed Dilantin (phenytoin) for subarachnoid hemorrhage (SAH) but received Daonil (glibenclamide); a 35-year-old woman who was prescribed prednisone 5 mg for an allergic disorder but was given prednisolone 50 instead; and a 65-year-old woman who visited an internist for digestive complaints and was prescribed "Digestive" tablets, but the pharmacy filled her prescription with digoxin. Unfortunately, some of these errors went undetected for several days to months and resulted in hospital admission.

    Various factors can increase the risk of LASA errors; poor handwriting in particular is a potential cause,3 and implementation of computerized physician order entry (CPOE) has decreased this type of error.4 After implementation of CPOE, we cannot accurately conclude whether or not Tall Man lettering is an efficient way to reduce the rate of LASA errors. Furthermore, no single reported method prevents these errors effectively; to decrease the risk of LASA errors, a multidimensional and integrated approach should therefore be implemented. Such methods include careful naming of new drugs using comprehensive statistical methods, use of generic drug names in prescriptions, more advanced drug distribution systems, education of patients, physicians, and pharmacists, CPOE, and Tall Man lettering.5

    Based on our clinical experience and extensive literature references, we conclude that there is still not enough evidence to reject the effectiveness of the Tall Man lettering strategy. A better estimate would require a more comprehensive investigation that also considers other important intervening factors.

    1. Zhong W, Feinstein JA, Patel NS, et al. Tall Man lettering and potential prescription errors: a time series analysis of 42 children's hospitals in the USA over 9 years. BMJ Qual Saf 2015. doi:10.1136/bmjqs-2015-004562 [Published Online First: 3 November 2015]

    2. Ciociano N, Bagnasco L. Look alike/sound alike drugs: a literature review on causes and solutions. Int J Clin Pharm 2014;36:233-42.

    3. Knudsen P, Herborg H, Mortensen AR, et al. Preventing medication errors in community pharmacy: root-cause analysis of transcription errors. Qual Saf Health Care 2007;16(4):285-90.

    4. Hernandez F, Majoul E, Montes-Palacios C, et al. An Observational Study of the Impact of a Computerized Physician Order Entry System on the Rate of Medication Errors in an Orthopaedic Surgery Unit. PLoS One 2015;10(7):e0134101. doi:10.1371/journal.pone.0134101

    5. Ostini R, Roughead EE, Kirkpatrick CMJ, et al. Quality Use of Medicines - medication safety issues in naming; look-alike, sound-alike medicine names. International Journal of Pharmacy Practice 2012;20:349-57.

    Conflict of Interest:

    None declared

  9. Fundamental change in our approach to EHR design is needed

    The article by Koppel, "The health information technology safety framework: building great structures on vast voids" (11/19/15, available at http://m.qualitysafety.bmj.com/content/early/2015/11/19/bmjqs-2015-004746.full.pdf), describes an EHR environment that violates just about every principle of safe system design. It is no wonder that there continue to be significant safety issues with EHRs.

    "Most experts would agree that cornerstones of safety in any industry, as pointed out in the IOM report To Err is Human: Building a Safer Health System and [by] many others, are simplicity, uniformity, and ease of use. Today's EHRs (of which there are hundreds of products on the market) as a whole are anything but simple, uniform and easy to use. . . . 'Requiring physicians to spend large amounts of time to operate EHR systems that are poorly designed, is a poor substitute for creating well-designed, safe, and easy-to-use EHR systems.'(1) It is stunning to me that in a [2013] 40-minute talk on patient safety at one of the national organizations of neurosurgeons, Dr. Donald Berwick hardly mentioned HIT or the EHR."(2)

    I do not believe we will effectively address the patient safety issues inherent in today's EHR environment until government, large health systems, and/or organized medicine, ideally working in concert, create a fundamental change in our approach, i.e., standardized EHRs with open source code, optimally licensed and governed so that end users can lead and control innovation.

    1. Hirschtick RE. Electronic Records and Hospital Progress Notes. JAMA 2012;308:2337.

    2. Wilder BL. The Politics of the EHR: Why we're not where we want to be and what we need to do to get there. 10/1/13. http://www.openhealthnews.com/articles/2013/politics-ehr-why-we're-not-where-we-want-be-and-what-we-need-do-get-there (accessed 12/7/15).

    Conflict of Interest:

    None declared

  10. An answer to the dilemma of whether emergency department length of stay improves quality of care

    Dear Editor,

    I commend Vermeulen et al for addressing a fundamental question: is ED length of stay (ED LOS), a globally used key performance indicator, actually associated with improvement in quality of care [1]?

    Vermeulen et al set out to determine whether patients presenting with one of three acute conditions (high-acuity asthma, upper arm/forearm/shoulder fracture, and acute myocardial infarction) at hospitals with reduced ED LOS following the introduction of the Ontario Emergency Room Wait Time Strategy were also likely to experience improvements in other measures of quality of care; that is, is evidence-based treatment more likely to be given and, if so, is it delivered in a timely fashion [1]?

    Interestingly, the study did not reveal an association between reduced ED LOS and improvement in other quality indicators, surprisingly not even for measures involving timely delivery of care [1]. Nevertheless, the authors did find that shift-level crowding was inversely associated with quality indicators related to timeliness of care: timeliness of reperfusion in AMI, of splinting and analgesia in adult patients with fractures, and of steroid and bronchodilator administration within 60 minutes of presentation with acute asthma [1]. This supports prior studies reporting a correlation between ED crowding and increased short-term or in-patient mortality [2, 3] and failure to administer timely care [4].

    In my view, this study confirms that both quality initiatives and assessment of quality of care ought to be multidimensional and not focussed on a single quality indicator. The association with shift-level crowding emphasises that we must concentrate on mapping trends in ED crowding over time to allow for appropriate ED staffing, and institute systems that aid efficiency while assuring safety during those times. This may prove a more effective way to improve overall quality of care encompassing safety, efficiency, timeliness and patient-centredness [5] than focussing on reducing ED LOS in isolation, which appears to be a poor measure of quality of care [1]. Furthermore, reducing ED LOS creates time pressure, a known major contributing factor to error in human performance, which is likely to predominantly affect the diagnosis and treatment of complex patients through performance degradation [6]. Finally, though it seems obvious that reduced ED LOS improves patient satisfaction, if it is associated with abrupt staff interactions and contributes to human error [6], this is unlikely to be the case.

    I applaud Vermeulen et al for highlighting the critical issue with using one-dimensional measures to assess quality of healthcare.

    References.

    1. Vermeulen MJ, et al. Are reductions in emergency department length of stay associated with improvements in quality of care? A difference-in-differences analysis. BMJ Qual Saf 2015.

    2. Guttmann A, Schull MJ, Vermeulen MJ. Association between waiting times and short term mortality and hospital admission after departure from emergency department: population based cohort study from Ontario, Canada. BMJ 2011;342.

    3. Sun B, Hsia R, Weiss R. Effect of emergency department crowding on outcomes of admitted patients. Ann Emerg Med 2013;61:605-611.

    4. Pines J, Hollander J, Localio A. The association between emergency department crowding and hospital performance on antibiotic timing for pneumonia and percutaneous intervention for myocardial infarction. Acad Emerg Med 2006;13:873-878.

    5. Pronovost P, et al. How can clinicians measure safety and quality in acute care? Lancet 2004;363:1061-1067.

    6. Suzuki T, Von Thaden TL, Geibel W. Influence of time pressure on aircraft maintenance errors. 2008.

    Conflict of Interest:

    None declared

