Connecting simulation and quality improvement: how can healthcare simulation really improve patient care?
  1. Victoria Brazil1,
  2. Eve Isabelle Purdy2,
  3. Komal Bajaj3
  1 Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Queensland, Australia
  2 Emergency Medicine, Queen’s University, Kingston, Ontario, Canada
  3 Clinical Quality, NYC Health + Hospitals/Jacobi, New York, New York, USA
  Correspondence to Dr Victoria Brazil, Bond University Faculty of Health Sciences and Medicine, Gold Coast, QLD 4229, Australia; vbrazil@bond.edu.au


Simulation has an established role in the education and training of healthcare professionals, but its function as a healthcare quality improvement (QI) tool is more emergent. In this edition of the journal, Ajmi and colleagues report on a simulation-based intervention that improved door-to-needle times and patient outcomes in acute ischaemic stroke.1 This prompts reflection on the positioning of simulation-based methods within QI programmes, the role of trained simulation experts as part of QI-focused teams and the directions for future scholarly enquiry that supports integration of these fields.

The improvement report by Ajmi et al is a comprehensive and thoughtful example among many reports of simulation-based interventions to improve care processes and patient outcomes. Improved time-based targets in trauma,2 stroke and cardiac care are frequently cited in the literature, as are better resuscitation outcomes3 and compliance with practice guidelines.4 The identification of latent safety threats in clinical environments,5 6 including testing of new facilities prior to opening,7 is also well described. Such research is usually positioned as providing ‘proof’ that simulation ‘works’ for improving patient care. However, confounders and balancing measures may not be rigorously examined in this enthusiasm to demonstrate cause and effect.

How, why or when simulation works for improving care is a more nuanced question

Team training using simulation can enable improvements in provider behaviours,8 including those described by Ajmi et al,1 where sequential tasks in time-critical patient journeys can be replaced by ‘parallel processing’. Location is also relevant, as ‘in situ simulation’—“taking place in the actual patient care setting/environment”9—affords a closer connection to the physical environment for ‘real’, often ad hoc, teams to identify enablers of or barriers to QI interventions.10 However, the effect could also lie in simulation shaping the culture and relationships11 12 that underpin and support structural or process-specific interventions.

Most reported simulation-based improvement efforts describe an intervention with a pre/post method of outcome assessment, without a control group, and researchers often imply that their success was a result of their approach, rather than merely associated with it.

As with most examples, it is challenging to tease out the specific aspects of the simulation programme design and facilitation that may have contributed to the impressive outcomes presented by Ajmi et al. Perhaps it was insightful debriefing around team behaviours and leadership for neurology registrars that made the difference? Or maybe it was increased familiarity with the real environment? Was getting to know each other’s names and faces through repeated, low-stress practice the key? Or did the true effect come from the programme acting as a strong value signal for the organisation—a message that improving stroke care was a priority, thereby facilitating the adoption of the new stroke protocol and motivating practitioners towards that collective goal? It’s probably a bit of all of the above, and more.

What exactly do we mean when describing interventions for QI as ‘simulation-based’?

The term ‘simulation-based’ is a broad descriptor, including software-based computer modelling,13 skills training and teams practising patient care. Even when more narrowly focused on in situ simulation, the heterogeneity of simulation delivery approaches makes comparisons difficult, and may reflect simulation provider capacity and experience rather than careful matching of method to objective. Most examples are reported in simulation journals by simulation expert authors rather than institutional QI teams, perhaps risking a method being ‘pushed’ at a purpose rather than ‘pulled’ into existing QI processes. The extent to which established QI methods such as healthcare failure mode and effects analysis or Plan-Do-Study-Act14 approaches are integrated within simulation-based interventions is variable, although some exemplars exist.15 16

What do we mean by ‘works’ when evaluating the effectiveness of simulation-based interventions for QI?

Evaluation through an educational lens considers outcomes ranging from learner preference to patient outcome metrics. However, few educational interventions are measured at the latter end of this spectrum. McGaghie et al performed a qualitative synthesis of simulation-based medical education ‘translational science research’, reviewing whether results achieved in the educational laboratory (T1 outcomes) transfer to improved downstream patient care practices (T2 outcomes) and improved patient and public health (T3 outcomes).17 They found that these patient-oriented outcomes were “more likely when simulation-based medical education interventions are embedded in rigorous educational and health services research programs that are thematic, sustained, and cumulative”. Hence, ‘one-off’ targeted simulation interventions may improve focused, narrow process goals, but might miss opportunities to cultivate other outcomes such as culture change or transferability to other care processes. McGaghie et al also describe cases in which “T3 health services research outcomes can be achieved without obvious educational interventions”,17 recognising that improving care is not just achieved through better training of health professionals. This is a challenge to the dominant paradigm for the use of simulation in healthcare as an educational tool.

Simulation programme leaders are well aware of the need for return on investment and demonstrable patient outcome improvements,18 but this can result in overemphasis on easily measurable time-based targets.19 Ajmi et al are working in a field where outcome measures are well defined and quantifiable, but this is not always the case. Interestingly, this challenge has led simulation providers to undertake scholarly work to identify relevant process, outcome and balancing measures. For example, a team attempting to improve paediatric trauma care through simulation recognised the lack of an accepted definition of what constitutes high-quality stabilisation of a traumatically injured child. This gap prompted them to shift their research course and focus on deriving such quality indicators within which to situate their improvement efforts.20

While ‘hard’ clinical outcomes are most visible, other outcomes of simulation—team performance, organisational culture and fostering relationships—that do not lend themselves as easily to common QI measures may be under-reported. For example, the simulation programme presented by Ajmi et al, while focused on stroke in the emergency department, may also have outfitted the neurology team with skills and confidence to more adeptly manage other neurological emergencies such as status epilepticus on the wards.

So, how should simulation programmes and staff conceptualise their role in QI?

Historically, healthcare simulation has claimed connection between its primarily educational focus—on improving skills, knowledge and attitudes in healthcare professionals—and improved patient safety, but in many cases this assumed link is hard to demonstrate.21

The Society for Simulation in Healthcare (SSH) describes a ‘systems integration’ role for simulation: “those simulation programs which demonstrate consistent, planned, collaborative, integrated and iterative application of simulation-based assessment and teaching activities with systems engineering and risk management principles to achieve excellent bedside clinical care, enhanced patient safety, and improved metrics across the healthcare system”.22 The SSH accreditation standards and the International Nursing Association for Clinical Simulation and Learning both emphasise governance structures and reporting relationships as evidence of translational impact. For example, “…clear evidence of participation by Simulation Program leadership in the design and processes of quality management system improvement activities at the organisational level”.22

Words matter, and using terms like ‘translational simulation’23 may allow simulation programmes to identify which of their activities are more focused on QI and which are more educationally focused. Both aims are laudable, but clarity of purpose can lessen the risk of diluting impact or disenfranchising stakeholders through trying to be ‘all things to all people’.

In addition to a clear mission, institutional relationships and governance structures supporting integration with QI, simulation programme leadership also needs extended skills for QI roles. Practice and research methods in QI have emerged from different traditions to those in healthcare simulation, and are underdeveloped in most simulation provider faculty development. Similarly, QI practitioners could benefit from perspectives and training in simulation and debriefing. Skills development is likely to be enhanced if simulation and QI staff in the same institutions have opportunities to work closely together.

Future directions for research and scholarship

Numerous case study examples of using simulation to improve patient care are now being synthesised in systematic reviews,10 yielding practical guidelines for using in situ simulation in QI and building conceptual frameworks for how impacts are achieved. Sollid et al reported on a consensus process “to define priorities in healthcare simulation that contribute the most to improve patient safety”,24 which included two of the five identified priorities clearly at the interface with QI: effectiveness and system probing. Collaboration between practitioners, stakeholders, researchers and journals in these respective fields will enhance progress. In box 1 we offer personal reflections for individuals and teams interested in integrating simulation and QI, to complement providers’ primary training and experience.

Box 1

Suggestions for QI and simulation practitioners interested in closer integration of their fields

Read—add articles found in quality/safety or simulation journals that integrate both fields onto your reading list.

Study—seek out professional development opportunities: courses, workshops, conferences in QI methodology or simulation/debriefing.

Collaborate—identify individuals in your local institution and find ways to work (and research) together.

Engage—connect with the larger community of practice working on these topics via in-person meetings or platforms such as Twitter and LinkedIn.

So, can simulation work as a QI tool?

Undoubtedly yes, but not always. We look forward to understanding more about when it works (and doesn’t), together with why, how and in which contexts. The thoughtful integration of simulation and QI as fields of practice and research has the potential to enhance the contribution of both to improving patient care.

References

Footnotes

  • Contributors All authors contributed to the writing and review of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests VB is Professor of Emergency Medicine and is employed by Gold Coast Hospital and Health Service where she works in healthcare simulation. She is also co-producer of the Simulcast podcast (www.simulationpodcast.com). EIP is a senior resident in Emergency Medicine at Queen’s University and recently completed a fellowship in translational simulation at Gold Coast University Hospital. KB is employed as Chief Quality Officer for NYC Health+Hospitals/Jacobi, and Clinical Director, NYC Health+Hospitals Simulation Center. She is also Associate Professor at Albert Einstein College of Medicine.

  • Patient consent for publication Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.
