We thank Dr. Iedema for highlighting that a gap exists in providers
having the skillset to 'work smarter.' We agree that novel approaches to
healthcare improvement are required that move beyond gadget-based
solutions and that require a new set of skills of providers and provider
organizations. The suggestion of videotaping one's performance to review how the system (and its participants) currently operates and to reflect on how to (re-)design workflows is intriguing. It exemplifies the
concept of 'exnovation' or 'innovation from within', meaning innovation
arises from within established practice, and from within practitioners.
However, our article did not aim to imply that it is providers who
are responsible for, or required to gain, the skills to work smarter. Our
message is directed to all those seeking and driving healthcare system
improvement. Although we agree that providers may benefit from the
skillsets that Dr. Iedema proposed, we believe that those seeking change
also need additional skillsets and perspectives. We can no longer presume
that healthcare providers have the space to add new tasks, workflows,
procedures, etc. We have, as a system, to work on simplifying the current work environment, finding non-value-added tasks, and working with healthcare providers to design ways of achieving improved outcomes that do not add net new workload or complexity. Some may argue that added work at one
part of the system may have larger benefits downstream. This may be true
but those charged with carrying the weight of the new tasks have to do so
in a sustainable and reliable way. Otherwise subsequent change
initiatives will disrupt this balance and its downstream benefits.
Our message was also aimed at those adding new regulation, policies,
performance measures and incentives or disincentives. Adding pressure on
top of an environment that has neither the space nor the knowledge and skill to create it only adds to workplace burden, resistance, and unsustained improvement. We believe that there needs to be a system-wide
look at the capabilities and investments required to create a 'working
smarter' healthcare system. Providers will play their role but they need
a commitment that a 'work harder' strategy is no longer acceptable.
The Hayes, Batalden and Goldmann piece is an important contribution
to the debate about what exactly is practice improvement. Most practice
improvement thinking is anchored in the 'innovation' paradigm, and this
paradigm is predominantly 'gadget thinking'. Others' solutions are to be
adopted here because they produce great outcomes elsewhere. Except now we
have to figure out how we can get the gadget to work.
Few commentators have been game to shift towards acknowledging that care
practices are now too complex for 'gadget thinking'. Hayes and colleagues
are an exception. They propose that frontline professionals themselves
need to become smarter at 'co-designing' solutions that suit their unique
contexts and practices. Here, we are not talking about adopting new
gadgets from elsewhere. We are talking about people who will - and who
have the skill to - take inspiration from the smartness that may be
invested in whatever gadget or improvement initiative, and apply this
smartness to their own workpractices. Indeed, these professionals may not
even need inspiration to come from elsewhere: they may well be motivated
by issues arising in their own work, and decide to redesign their
practices.
But to date, we have not focused on what this ability to co-design care
practices consists in. We expect frontline professionals to somehow know
how to co-design practice, and know how to be smart about what they do and
what they should do. And yet, their training has not skilled them in
practice design. We nevertheless expect them to readily (re)design the
organisational dimensions of their work. Usually, such designs fall prey
to people's espoused ideas and pre-existing assumptions about how things
work and should work. Often there are worrying gaps between what people
know and what they (think they) do. Put differently, smartness, in the
sense of learning about how to manage complex situations and improve
complex practices, is rare.
Smartness cannot be expected to exist or arise in situations where there
are no resources available for professionals to learn about (or 'make
explicit') the complexities of their own day-to-day work. Smartness must
be nurtured.
The way par excellence to achieve this is for professionals, just as top-end athletes do, to study their own performances. In sport, video-ing one's game to transform good performances into excellent ones is now not just common but indispensable. This is about capitalising on and
building on existing strengths. By analogy, video-ing in situ practice and
using the resulting footage to reflect on the work is central to enhancing
smartness at work. This is what Katherine Carroll and Jessica Mesman and
colleagues have referred to as 'exnovation'.
Of course, many excuses and objections are raised to auto-observation, the
most common ones of which are privacy, the Hawthorne effect, and
subpoenable evidence. But these concerns are over-stated, and they trade a
real need and opportunity for improvement and smartness off against
maintaining the status quo. Without auto-observation, existing habits and
routines will go on unquestioned. Work can only become harder, as the only
solutions to improvement will remain gadget-based. Smartness, by contrast,
starts from where we are, and explores where we can go.
Dear Sir,
It is with great interest that we read the recent publication by Thomas and colleagues investigating ward-based patient care.1 They describe a study in which 28 medical students were randomised to either control (no intervention) or intervention (performance feedback and error management training) groups, performing simulated ward rounds complicated by environmental distractors. Significant reductions in errors were seen in both groups from the first to the second ward round, with a significantly greater reduction seen in the intervention group.
We thoroughly commend the authors on their efforts to add to the body of literature on what is a crucial but, until now, sparsely investigated area of care. There can be no doubt that in current practice the conduct of ward rounds may be hugely variable,2, 3 with significant implications for patient outcomes.2 In the surgical literature, the phenomenon of "failure to rescue" describes failures in ward-based management of complications, which represent a major source of variability in surgical outcomes, emphasising the need to focus on ward rounds to improve outcomes.4
Future research in this area must be robust, evidence-based, and ideally tied to clinically relevant subjects and outcomes. With this in mind, we would like to raise several questions in reference to the study by Thomas et al. How were the "distractors" selected? Loud radio noises and upset relatives would appear to represent fairly arbitrary factors with unclear relevance to clinical care. Additionally, the authors appear to suggest that the intervention included very specific feedback on how to cope with these distractors - if part of the scoring is to assess whether the radio was turned off, and the intervention includes instruction to do so, can the result be truly deemed valid? Finally, was there a reason for selecting medical students rather than a more valid population of clinical staff such as house officers, or even residents, who are commonly responsible for the ward round?
Recently, we have described the Surgical Ward-care Assessment Tool (SWAT), a checklist-based tool for technical skills, and the Ward-based Non-Technical Skills score (W-NOTECHS), a Likert-based tool for non-technical skills; together these represent objective, validated scoring scales for ward round performance.5 It is possible that the adaptation of such surgical rating scales to address other specialty populations may present an effective way forward. We are fully in agreement with Thomas and colleagues in their statement that to move ward round initiatives forward, we must in future focus on changing patient safety behaviours. Thus, future assessments of ward round performance must focus on objective assessment metrics which are generalisable across studies, contexts, and specialties. Only in this manner can reliable, reproducible interventions be developed to standardise and improve care and outcomes.
References
1. Thomas I, Nicol L, Regan L, et al. Driven to distraction: a prospective controlled study of a simulated ward round experience to improve patient safety teaching for medical students. BMJ Qual Saf 2014.
2. Pucher PH, Aggarwal R, Darzi A. Surgical ward round quality and impact on variable patient outcomes. Ann Surg 2014; 259:222-6.
3. Blucher KM, Dal Pra SE, Hogan J, Wysocki AP. Ward safety checklist in the acute surgical unit. ANZ J Surg 2013.
4. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Med Care 1992; 30:615-29.
5. Pucher PH, Aggarwal R, Srisatkunam T, Darzi A. Validation of the Simulated Ward Environment for Assessment of Ward-Based Surgical Care. Ann Surg 2014; 259:215-21.
Conflict of Interest:
Rajesh Aggarwal is a consultant for Applied Medical. No other competing interests declared.
Recently, Provenzano and colleagues found that an electronic tool
collecting real-time clinical information directly from front-line
providers was both feasible and useful to evaluate inpatient deaths [1].
These findings concur with our evaluation of the preventability of death
using a simple electronic evaluation tool in our 46-bed adult Intensive
Care Unit.
From September 2010 to September 2011, an email was sent to the attending intensivist each time a patient died in our intensive care unit, including two questions: "Was this death preventable? If yes, what was the cause of preventability?". The definition of preventable mortality was provided using three criteria: the illness was survivable, care was suboptimal, and the suboptimal care was related to death. No reminder emails
were sent. In addition, the patient charts of all cases were
retrospectively reviewed by two ICU nurses and a physician.
A total of 306 patients (9.9%) died. The APACHE IV standardised mortality ratio was 0.77. In 48 of these deceased patients, the APACHE IV-based mortality risk was below 20%. The response rate was 92%, and 47 deaths (15%) were reported to be potentially preventable. Large inter-individual variations between the intensivists (n=24) were observed: response rates varied between 65% and 100%, and preventable-death judgements varied from none to 66%. On blinded chart review, the two nurses and the physician judged death potentially preventable in 7%, 11%, and 18% of cases, respectively.
Like Provenzano et al., we also found poor agreement between the preventability ratings from front-line intensivist reviews and those from blinded chart review [2]. In 21 cases (45%) in which the intensivist scored the death as preventable, all three reviewers scored it as non-preventable. This might partly be explained by additional information on each patient's individual circumstances that cannot easily be deduced from the patient's chart. Using APACHE IV as a selection criterion for in-depth evaluation is insufficient, as analysis of the patients with an APACHE IV-based risk of mortality below 20% showed that only 4 of these deaths (8.3%) were considered potentially preventable [3].
Evaluation of the preventability of all inpatient deaths is required for quality improvement and/or by regulatory authorities.
A quick and efficient method with high response rates from front-line
providers is feasible and may provide useful information for quality
improvement [4]. However, large inter-individual variations in response
and judgment exist and, therefore, this method apparently is insufficient
for benchmarking.
References:
1. Provenzano A, Rohan S, Trevejo E, et al. Evaluating inpatient
mortality: a new electronic review process that gathers information from
front-line providers. BMJ Qual Saf 2015;24:31-37.
2. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors:
preventability is in the eye of the reviewer. JAMA 2001;286:415-20.
3. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality
is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf
2012;21:1052-1056.
4. Dijkema LM, Dieperink W, van Meurs M, et al. Preventable mortality
evaluation in the ICU. Crit Care 2012;16:309.
I have recently returned from the Association of Simulated Practice in Healthcare 2014 conference in Nottingham and, whilst there, was privileged to hear and meet Professor Erik Hollnagel. He presented eloquently on his work relating to “From Safety I to Safety II” [1], which provided an excellent opening for the conference’s theme of “Changing Behaviours.” His work sparked much debate and reflection, particularly for me when presenting our simulation work related to the Duty of Candour. We opened with a discussion considering how the NHS was perceived by the general population of the UK. The conversation moved to the role of the media in driving the campaign for patient safety and openness.
The media has embraced the reports of a small number of high-profile failings in the NHS, with the now daily reporting of another “failure” or “cover-up”. It is therefore understandable that a large proportion of the population do not trust the NHS and feel there is a closed and dishonest culture [2]. The media focuses on the Safety I premise of failures [1]. This is driving the destruction of the NHS’s reputation and the wellbeing of staff and patients by focusing on the minority of outcomes which are negative. In November 2013, our local Trust was reported to be the second worst general hospital in England for avoidable deaths [3]. A review of the data and response from the Trust identified that the news report was misleading and the data inaccurate, causing unnecessary anxiety amongst patients and staff [4]. Such media reports place extra strain on the healthcare system, with reputational damage and effects on morale which affect the ability of that organisation to sustain required operations.
The discussion regarding media involvement in the NHS prompted me to consider this further and I read with great interest the 2002 paper published in BMJ Quality and Safety considering the role of the media in pushing patient safety forward as the priority [5]. There is no doubt that media involvement has benefitted the patient safety agenda, by acting as a “watchdog” to hold the medical profession accountable for improved safety and quality of care. This in turn has created a passionate group of healthcare professionals striving for excellence in care.
12 years later, however, the focus still remains on the serious errors, incidents and failures of the NHS. These events are still the minority of events, but the focus remains on what went wrong. As it is time for healthcare to focus on Safety II, should it not be the same for the media? By focussing on what goes right and the NHS’s incredible ability to succeed under varying conditions [1], the media can celebrate the NHS and help to drive the next stage of safety improvement. It is time for the media to also move from Safety I to Safety II thinking.
The difficulty will be in convincing the media of its role in the next stage of safety. It remains important for the NHS to be transparent, but a balance must be sought between the ongoing need for accurate reporting of serious problems and celebration of the NHS’s staff and its successes. In a recent well known report on health and healthcare service delivery [6], the UK ranked number one against ten other wealthy countries for overall healthcare (based on quality, access, efficiency and equity).
Professor Hollnagel defined resilience as the ability of the healthcare system to adjust its functioning to sustain operations under both expected and unexpected conditions [1]. The media must understand the complexity of the NHS and be aware of the potential for their reporting to inadvertently remove those parts of the healthcare system that have contributed to its resilience.
1) Hollnagel E. Safety-I and Safety-II: The Past and Future of Safety Management. Surrey, United Kingdom: Ashgate; 2014.
2) YouGov UK. One in two don’t trust the NHS. [Online] 2013. Available from: https://yougov.co.uk/news/2013/06/13/1-2-do-not-trust-nhs/ [Accessed 14th November 2014].
3) Adams S. How 3,500 hospital patients lost their lives due to surgical errors or staff who were too busy to treat them... in just TWELVE months. The Mail on Sunday. [Online] November 09 2013. Available from: www.dailymail.co.uk [Accessed 14th November 2014].
4) Nottingham University Hospitals NHS Trust. Response to Mail On Sunday coverage (avoidable deaths). [Online: media response] 2013. Available from: http://www.nuh.nhs.uk/media/1459425/response_to_mail_on_sunday_coverage.pdf [Accessed 14th November 2014].
5) Millenson ML. Pushing the profession: how the news media turned patient safety into a priority. Qual Saf Health Care 2002; 11: 57–63.
6) Davis K, Stremikis K, Schoen C, Squires D. Mirror, Mirror on the Wall, 2014 Update: How the U.S. Health Care System Compares Internationally. The Commonwealth Fund. 2014.
Dear Sir,
We read with interest the article by Schmidt et al. We applaud the authors for undertaking this large and complex study and for highlighting the great potential of newer technologies to improve patient care.
We hoped the authors could clarify some key issues. Firstly, only one year's mortality data are used as a baseline comparator. Mortality fluctuates by year, as this paper highlights, and can be affected by a large number of factors, including how it is expressed (1). It is possible that the year chosen may have been an outlier that triggered the Trusts to actively invest in measures including EPSS. We would therefore be grateful if the authors could provide additional data on mortality in the years prior to the intervention. Were other strategies employed alongside EPSS? For example, we understand University Hospital Coventry also called in Dr Foster Intelligence in 2007 to restructure practice (2).
As the paper uses only a historical comparator it is possible that a proportion of the improvement reflects the general national improvement in hospital mortality seen over the last decade (3). Do the authors have any data comparing their improvements with other Trusts of a similar size, case-mix, and similarly average HSMR (4)?
Interventions in healthcare are rarely without some adverse effects and, as such, we would be interested in any data collected on the potential negative aspects. These would include the consequences of the increased workload for junior doctors and the financial cost. Establishing that these were relatively minor would be very reassuring for other Trusts considering similar strategies.
While we agree that randomised controlled trials are complex, we suggest there is a strong rationale for them to disaggregate the benefit of EPSS from many confounding factors, and to inform clear health economic analysis.
Yours sincerely,
Dominick Shaw, John Blakey and Jamie Rylance
1 http://www.nejm.org/doi/full/10.1056/NEJMsa1006396#t=articleMethods
2 http://drfosterintelligence.co.uk/wp-content/uploads/2013/02/University-Hospitals-Coventry-Warwickshire-NHS-Trust-case-study.pdf
3 http://www.biomedcentral.com/1472-6963/13/216
4 http://drfosterintelligence.co.uk/wp-content/uploads/2011/11/Hospital_Guide_2011.pdf
We wish to congratulate Russ SJ et al. (1) for their excellent survey
investigating patients' views of the WHO safer surgery checklist.
The authors point out that the UK wide implementation of the
checklist has encountered some difficulties. Specifically, barriers
including checklist fatigue and difficulties in assembling the theatre
team are mentioned. Whilst we certainly agree with this, we wish to add to the authors' catalogue of concerns by sharing our experience at Queen
Alexandra Hospital (QAH).
At QAH we operate a modified WHO safer surgery checklist to suit
local practice. The checklist is applied to every patient passing through
the theatre complex. During a routine audit we identified how an
apparently minor communication error fundamentally undermined the
checklist's safety function and placed our patients at risk.
Our venous thromboembolism (VTE) prophylaxis checkpoint reads 'VTE
prophylaxis considered?'. In practice, however, this question is frequently
altered to 'Flowtron's on?' (Flowtron refers to the intermittent pneumatic
calf compression devices (IPCCD) used at QAH). The multiple meanings of
the word 'on' (either interpreted as 'on the patient' or 'switched on')
introduced ambiguity and a communication error. This incorrect use of the
checklist resulted in multiple patients having IPCCDs applied to their
calves, yet the devices were never switched on, and our patients were placed
at risk.
Our experience illustrates two important communication errors that
may undermine the checklist's safety function. Firstly, accurate and
unambiguous wording of each component of the checklist is essential. Words
with homonymous meanings should be avoided where possible. Secondly, each
checklist question must be verbalised accurately during the patient check
to avoid introducing errors.
The original WHO safer surgery checklist (2009) (2) limits such
potential error, as most questions are yes/no answerable. Any local
checklist modifications should aim to maintain this format. Introducing
words with homonymous meanings may lead to communication errors, undermine the checklist's safety function, and place patients at risk.
Reference:
1. Russ SJ, Rout S, Caris J, Moorthy K, Mayer E, Darzi A, Sevdalis N, Vincent C. The WHO surgical safety checklist: survey of patients' views. BMJ Qual Saf 2014 Jul 18.
The authors (1) have raised a very important issue relating to
recognition and management of a deteriorating patient. Over the years,
cases have been reported where outcome may have been better if
deterioration was recognized in time. Once recognized, an urgent response
by a qualified team could instigate immediate investigations and
management as warranted, possibly averting a poor outcome.
Code blue calls or cardiac arrest teams (2) were first introduced in
1970, with the motive of initiating an urgent response to a deteriorating
patient. By definition, activation of this system occurred after an arrest
had occurred, so the patient had no recordable pulse, blood pressure, or respiration and did not respond to noxious stimuli.
However, more gains were to be made by initiating this response
before the patient had reached a terminal stage. Based on research showing
that cardiac arrest usually follows a series of events, attempts were made
to identify these events so as to preempt an arrest before it actually
occurred. Medical emergency teams (MET) were a culmination of these
efforts.
MET responses, introduced circa 2000, include a critical care registrar and nurse, among others. Any clinician caring for a deteriorating patient is encouraged to activate the response through a rapid response system and can expect help within minutes. Whilst the
concept of MET response is similar to that of cardiac arrest teams, a
fundamental difference is in the timing of initiating the response.
However, the MET response is also activated after a level of
deterioration has occurred. The quest continued to find alarm signs or
signals that indicate deterioration is likely to occur. Once again, the
presumption is that an earlier response, before deterioration has
occurred, should result in a better outcome.
Analysis of hospital admissions suggests an adverse outcome is likely
in about 10% of admitted patients (3). Improving the outcome further,
particularly for these 10%, has triggered a nationally coordinated
approach that is being overseen by the Australian Commission on Safety and
Quality in Health Care (ACSQHC).
A new paradigm as suggested by Jones et al (4) would be required to
drive this further improvement. The focus is now on early detection and
prediction of clinical deterioration, so urgent help can be sought even
before the situation actually worsens. Eight essential elements have been identified and compiled by the ACSQHC into a package. Despite differences, it was encouraging that this consensus statement was ratified by all state health ministers in Australia (5). The package, widely distributed throughout Australian hospitals, is intended to improve outcomes by encouraging early detection of deterioration and early calls for help.
These strategies, in addition to the "swimming between the flags" observation chart and rapid response systems, include many other initiatives, with education as one of the essential elements.
Different educational programs and packages such as COMPASS and DETECT (5)
have been developed in Australia specifically to improve practice regarding the recognition of, and response to, clinical deterioration amongst all staff.
References:
1. Hughes C, Pain C, Braithwaite J, Hillman K. 'Between the flags':
implementing a rapid response system at scale. BMJ Qual Saf 2014;23:714-
717
2. McGrath RB. In-hospital cardiopulmonary resuscitation -- after a
quarter of a century. Ann Emerg Med 1987; 16: 1365-1368.
3. Runciman W and Moller J. Iatrogenic Injury in Australia, A Report
prepared by the Australian Patient Safety Foundation for the National
Health Priorities and Quality Branch of the Department of Health and Aged
Care of the Commonwealth of Australia (2001) available from:
http://www.apsf.net.au/dbfiles/Iatrogenic_Injury.pdf (accessed September
2014)
4. Jones AD, Dunbar NJ and Bellomo R. Clinical deterioration in
hospital inpatients: the need for another paradigm shift. Med J Aust 2012;
196 (2): 97-100
5. Australian Commission on Safety and Quality in Health Care.
National consensus statement: essential elements for recognising and
responding to clinical deterioration. Sydney: ACSQHC, 2010. Available
from: http://www.safetyandquality.gov.au/wp-content/uploads/2012/02/Nat-
Consensus-Statement-PDF-Complete-Guide.pdf (accessed Sept 2014)
Dear Editor,
we would like to congratulate Russ et al. on their paper on the patients'
views of surgical checklists (SC). In their elegant work, the above
authors underlined that assessing the fidelity of the SC remains a
challenge, but demonstrated a high level of patient support for use of
checklists. They found that patients were surprised that SC was only a
recent introduction to surgical care. Moreover, the authors stressed that
the majority of patients agreed that they would like the SC to be used if
they were having an operation.
In our experience, we confirm that the value of SC does not lie in the so-
called Hawthorne effect, but in changing (improving!) the mental model. As
also documented in the field of aviation, most accidents tend to involve
non-technical skills (NTS) such as communication, leadership, conflict,
and flawed decision-making. The relationship between NTS and human error
has been extensively demonstrated.
In aviation it is mandatory for pilots to read a checklist for every single phase of flight.
Of course they know the checklists by heart, but what if... you are stressed, on the last leg of the day, distracted, with family problems?
Of course it may be that you don't need any checklist, but will you risk it?
Will you risk taking off from Toronto under snow knowing that the pilots didn't read any checklist, because they know the procedures by heart and because statistics say it doesn't matter and the results are the same?
We would like to conclude that one of the effective barriers to error is the surgical safety checklist and, believe it or not, we are sure that pilots, if undergoing surgery, would like to know that the surgeon is using the appropriate checklists that day!
____________________________________________________________
Fabrizio Dal Moro is an Assistant Professor at the University of Padova and an expert on NTS.
Gianluigi Zanovello and Fabio Cassan are airline pilots in Italy: Zanovello is a former "Frecce Tricolori" (Italian aerobatic team) leader; Cassan was a fighter squadron commander (51° Stormo, Aeronautica Militare Italiana).
They teach at the simulation center of the University of Verona medical school, Italy. There, surgeons can practice exactly as pilots do, rehearsing not only the anatomy before the real procedure but something else as well: becoming familiar with NTS and understanding that communication, decision making, teamwork, and situation awareness are as important as professional and technical skills.
The editorial from Sheikh, Atun, and Bates is welcome in flagging up
a key issue in the context of England and the US. However, it is not a
new issue, and it is disappointing that they do not acknowledge prior and
concurrent work.
The need for, and challenges impeding, evaluation of health
information systems have been flagged up much earlier, e.g. Rigby 1999;
2001. Both the European Federation for Medical Informatics (EFMI) and the
International Medical Informatics Association (IMIA) have groups which
have followed up this theme. Ammenwerth instigated a European workshop
which inspired a significant work programme (Ammenwerth et al, 2004), and
led to production of reporting standards adopted by the EQUATOR network
(Talmon et al, 2009) and guidelines (Nykänen et al, 2011) which have been
fully elaborated (Brender et al, 2013).
The specific dual challenges behind the editorial by Sheikh, Atun and
Bates are the penchant for politicians to decree policy outside their
technical knowledge in order to appear progressive, and the generic need
for evidence-based policy in health informatics. This latter too has
recently been addressed - generically by a dedicated edition of the IMIA
Year Book (Séroussi et al, 2013) which included a summary of the concerted
actions of a decade (Rigby et al, 2013); and in the context of developing
countries by WHO (2011) and through a joint WHO-IMIA Programme (IMIA,
2012).
Moving to evidence-based health informatics policy is vital for
effectiveness, efficiency, safety, and enhanced health care delivery and
outcomes. Such an approach faces challenges as it cuts across the
perceived autonomy of politicians, and the worrying scant regard for a
scientific evidence base of some sectors of the supplier industry, while
evaluation to produce the evidence continually faces impediments as
described. It is therefore vitally important that all innovators and
activists work collaboratively to progress the issues.
Michael Rigby
Rigby M (1999). Health Informatics as a Tool to Improve Quality in Non-acute Care - New Opportunities and a Matching Need for a New Evaluation
Paradigm; International Journal of Medical Informatics, 56, 1999, 141-150.
Rigby M (2001). Evaluation: 16 Powerful Reasons Why Not to Do It - And
6 Over-Riding Imperatives; in Patel V, Rogers R, Haux R (eds.): Medinfo
2001: Proceedings of the 10th. World Congress on Medical Informatics, IOS
Press, Amsterdam, 2001, 1198-1202.
Ammenwerth E et al (2004). Visions and strategies to improve
evaluation of health information systems: Reflections and lessons based on
the HIS-EVAL workshop in Innsbruck. International Journal of Medical
Informatics, 2004 Jun 30; 73(6):479-91.
Talmon J et al (2009). STARE-HI - Statement on Reporting of Evaluation Studies in Health Informatics; International Journal of Medical Informatics, 2009; 78(1): 1-9.
Nykänen P (2011). Guideline for good evaluation practice in health
informatics (GEP-HI); International Journal of Medical Informatics, 80,
815-827, 2011.
Brender J (2013). STARE-HI - Statement on Reporting of Evaluation
Studies in Health Informatics: explanation and elaboration. Applied
Clinical Informatics, 2013; 4: 331-358.
Séroussi B, Jaulent M-C, Lehmann CU (eds.) (2013). Evidence-based
Health Informatics - IMIA Yearbook of Medical Informatics 2013; 34-46,
Schattauer, Stuttgart, 2013.
Rigby M. et al (2013). Evidence Based Health Informatics: 10 Years of
Efforts to Promote the Principle - Joint Contribution of IMIA WG EVAL and
EFMI WG EVAL; in Séroussi B, Jaulent M-C, Lehmann CU. Evidence-based Health
Informatics - IMIA Yearbook of Medical Informatics 2013; 34-46,
Schattauer, Stuttgart, 2013.
WHO (2011). Call to Action on Global eHealth Evaluation - Consensus
Statement of the WHO Global eHealth Evaluation Meeting, Bellagio,
September 2011; available from http://www.healthunbound.org/content/call-
action-global-ehealth-evaluation